00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 632 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3297 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.114 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.115 The recommended git tool is: git 00:00:00.116 using credential 00000000-0000-0000-0000-000000000002 00:00:00.120 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.157 Fetching changes from the remote Git repository 00:00:00.158 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.192 Using shallow fetch with depth 1 00:00:00.192 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.192 > git --version # timeout=10 00:00:00.223 > git --version # 'git version 2.39.2' 00:00:00.223 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.249 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.249 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.248 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.260 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.273 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:07.273 > git config core.sparsecheckout # timeout=10 00:00:07.284 > git read-tree -mu HEAD # timeout=10 00:00:07.301 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5 00:00:07.320 Commit message: "packer: Add bios builder" 00:00:07.321 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10 00:00:07.414 [Pipeline] Start of Pipeline 00:00:07.427 [Pipeline] library 00:00:07.428 Loading library shm_lib@master 00:00:07.428 Library shm_lib@master is cached. Copying from home. 00:00:07.440 [Pipeline] node 00:00:07.447 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:00:07.449 [Pipeline] { 00:00:07.461 [Pipeline] catchError 00:00:07.463 [Pipeline] { 00:00:07.475 [Pipeline] wrap 00:00:07.482 [Pipeline] { 00:00:07.488 [Pipeline] stage 00:00:07.490 [Pipeline] { (Prologue) 00:00:07.506 [Pipeline] echo 00:00:07.507 Node: VM-host-SM17 00:00:07.514 [Pipeline] cleanWs 00:00:07.522 [WS-CLEANUP] Deleting project workspace... 00:00:07.523 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.528 [WS-CLEANUP] done 00:00:07.702 [Pipeline] setCustomBuildProperty 00:00:07.786 [Pipeline] httpRequest 00:00:07.810 [Pipeline] echo 00:00:07.812 Sorcerer 10.211.164.101 is alive 00:00:07.821 [Pipeline] httpRequest 00:00:07.825 HttpMethod: GET 00:00:07.826 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:07.826 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:07.842 Response Code: HTTP/1.1 200 OK 00:00:07.842 Success: Status code 200 is in the accepted range: 200,404 00:00:07.843 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:13.377 [Pipeline] sh 00:00:13.659 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:13.676 [Pipeline] httpRequest 00:00:13.706 [Pipeline] echo 00:00:13.707 Sorcerer 10.211.164.101 is alive 00:00:13.716 [Pipeline] httpRequest 00:00:13.721 HttpMethod: GET 00:00:13.722 URL: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:13.722 Sending request to url: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:13.723 Response Code: HTTP/1.1 200 OK 00:00:13.723 Success: Status code 200 is in the accepted range: 200,404 00:00:13.724 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:32.040 [Pipeline] sh 00:00:32.360 + tar --no-same-owner -xf spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:35.650 [Pipeline] sh 00:00:35.928 + git -C spdk log --oneline -n5 00:00:35.928 dbef7efac test: fix dpdk builds on ubuntu24 00:00:35.928 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:00:35.928 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:00:35.928 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:00:35.928 e03c164a1 nvme: add nvme_ctrlr_lock 00:00:35.947 [Pipeline] withCredentials 00:00:35.957 > git --version # timeout=10 00:00:35.969 > git --version # 'git version 2.39.2' 00:00:35.983 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:35.985 [Pipeline] { 00:00:35.994 [Pipeline] retry 00:00:35.996 [Pipeline] { 00:00:36.013 [Pipeline] sh 00:00:36.291 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:39.592 [Pipeline] } 00:00:39.616 [Pipeline] // retry 00:00:39.622 [Pipeline] } 00:00:39.642 [Pipeline] // withCredentials 00:00:39.653 [Pipeline] httpRequest 00:00:39.673 [Pipeline] echo 00:00:39.675 Sorcerer 10.211.164.101 is alive 00:00:39.683 [Pipeline] httpRequest 00:00:39.687 HttpMethod: GET 00:00:39.688 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:39.688 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:39.692 Response Code: HTTP/1.1 200 OK 00:00:39.693 Success: Status code 200 is in the accepted range: 200,404 00:00:39.693 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:23.293 [Pipeline] sh 00:01:23.572 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:25.487 [Pipeline] sh 00:01:25.767 + git -C dpdk log --oneline -n5 00:01:25.767 caf0f5d395 version: 22.11.4 00:01:25.767 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:25.767 dc9c799c7d vhost: fix missing spinlock unlock 
00:01:25.767 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:25.767 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:25.783 [Pipeline] writeFile 00:01:25.797 [Pipeline] sh 00:01:26.077 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:26.089 [Pipeline] sh 00:01:26.371 + cat autorun-spdk.conf 00:01:26.371 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.371 SPDK_TEST_NVMF=1 00:01:26.371 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:26.371 SPDK_TEST_URING=1 00:01:26.371 SPDK_TEST_USDT=1 00:01:26.371 SPDK_RUN_UBSAN=1 00:01:26.371 NET_TYPE=virt 00:01:26.371 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:26.371 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:26.371 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:26.378 RUN_NIGHTLY=1 00:01:26.380 [Pipeline] } 00:01:26.398 [Pipeline] // stage 00:01:26.414 [Pipeline] stage 00:01:26.417 [Pipeline] { (Run VM) 00:01:26.431 [Pipeline] sh 00:01:26.712 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:26.712 + echo 'Start stage prepare_nvme.sh' 00:01:26.712 Start stage prepare_nvme.sh 00:01:26.712 + [[ -n 7 ]] 00:01:26.712 + disk_prefix=ex7 00:01:26.712 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 ]] 00:01:26.712 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/autorun-spdk.conf ]] 00:01:26.712 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/autorun-spdk.conf 00:01:26.712 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.712 ++ SPDK_TEST_NVMF=1 00:01:26.712 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:26.712 ++ SPDK_TEST_URING=1 00:01:26.712 ++ SPDK_TEST_USDT=1 00:01:26.712 ++ SPDK_RUN_UBSAN=1 00:01:26.712 ++ NET_TYPE=virt 00:01:26.712 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:26.712 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:26.712 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:26.712 ++ RUN_NIGHTLY=1 00:01:26.712 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:01:26.712 + nvme_files=() 00:01:26.712 + declare -A nvme_files 00:01:26.712 + backend_dir=/var/lib/libvirt/images/backends 00:01:26.712 + nvme_files['nvme.img']=5G 00:01:26.712 + nvme_files['nvme-cmb.img']=5G 00:01:26.712 + nvme_files['nvme-multi0.img']=4G 00:01:26.712 + nvme_files['nvme-multi1.img']=4G 00:01:26.712 + nvme_files['nvme-multi2.img']=4G 00:01:26.712 + nvme_files['nvme-openstack.img']=8G 00:01:26.712 + nvme_files['nvme-zns.img']=5G 00:01:26.712 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:26.712 + (( SPDK_TEST_FTL == 1 )) 00:01:26.712 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:26.712 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:26.712 + for nvme in "${!nvme_files[@]}" 00:01:26.712 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:01:26.712 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:26.712 + for nvme in "${!nvme_files[@]}" 00:01:26.712 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:01:26.712 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:26.712 + for nvme in "${!nvme_files[@]}" 00:01:26.712 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:01:26.712 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:26.712 + for nvme in "${!nvme_files[@]}" 00:01:26.712 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:01:26.712 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:26.712 + for nvme in "${!nvme_files[@]}" 00:01:26.712 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:01:26.712 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:26.712 + for nvme in "${!nvme_files[@]}" 00:01:26.712 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:01:26.712 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:26.712 + for nvme in "${!nvme_files[@]}" 00:01:26.712 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:01:28.089 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:28.089 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:01:28.089 + echo 'End stage prepare_nvme.sh' 00:01:28.089 End stage prepare_nvme.sh 00:01:28.102 [Pipeline] sh 00:01:28.383 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:28.383 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora38 00:01:28.383 00:01:28.383 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk/scripts/vagrant 00:01:28.383 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk 00:01:28.383 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:01:28.383 HELP=0 00:01:28.383 DRY_RUN=0 00:01:28.383 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:01:28.383 NVME_DISKS_TYPE=nvme,nvme, 00:01:28.383 NVME_AUTO_CREATE=0 00:01:28.383 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:01:28.383 NVME_CMB=,, 00:01:28.383 NVME_PMR=,, 00:01:28.383 NVME_ZNS=,, 00:01:28.383 NVME_MS=,, 00:01:28.383 NVME_FDP=,, 
00:01:28.383 SPDK_VAGRANT_DISTRO=fedora38 00:01:28.383 SPDK_VAGRANT_VMCPU=10 00:01:28.383 SPDK_VAGRANT_VMRAM=12288 00:01:28.383 SPDK_VAGRANT_PROVIDER=libvirt 00:01:28.383 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:28.383 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:28.383 SPDK_OPENSTACK_NETWORK=0 00:01:28.383 VAGRANT_PACKAGE_BOX=0 00:01:28.383 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 00:01:28.383 FORCE_DISTRO=true 00:01:28.383 VAGRANT_BOX_VERSION= 00:01:28.383 EXTRA_VAGRANTFILES= 00:01:28.383 NIC_MODEL=e1000 00:01:28.383 00:01:28.383 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora38-libvirt' 00:01:28.383 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:01:31.668 Bringing machine 'default' up with 'libvirt' provider... 00:01:31.927 ==> default: Creating image (snapshot of base box volume). 00:01:32.186 ==> default: Creating domain with the following settings... 00:01:32.186 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721988405_8acb236681adf9e9d3fe 00:01:32.186 ==> default: -- Domain type: kvm 00:01:32.186 ==> default: -- Cpus: 10 00:01:32.186 ==> default: -- Feature: acpi 00:01:32.186 ==> default: -- Feature: apic 00:01:32.186 ==> default: -- Feature: pae 00:01:32.186 ==> default: -- Memory: 12288M 00:01:32.186 ==> default: -- Memory Backing: hugepages: 00:01:32.186 ==> default: -- Management MAC: 00:01:32.186 ==> default: -- Loader: 00:01:32.186 ==> default: -- Nvram: 00:01:32.186 ==> default: -- Base box: spdk/fedora38 00:01:32.186 ==> default: -- Storage pool: default 00:01:32.186 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721988405_8acb236681adf9e9d3fe.img (20G) 00:01:32.186 ==> default: -- Volume Cache: default 00:01:32.186 ==> default: -- Kernel: 00:01:32.186 ==> default: -- Initrd: 00:01:32.186 ==> default: -- Graphics Type: vnc 00:01:32.186 ==> default: -- Graphics Port: -1 00:01:32.186 ==> default: -- Graphics IP: 127.0.0.1 00:01:32.186 ==> default: -- Graphics Password: Not defined 00:01:32.186 ==> default: -- Video Type: cirrus 00:01:32.186 ==> default: -- Video VRAM: 9216 00:01:32.186 ==> default: -- Sound Type: 00:01:32.186 ==> default: -- Keymap: en-us 00:01:32.186 ==> default: -- TPM Path: 00:01:32.186 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:32.186 ==> default: -- Command line args: 00:01:32.186 ==> default: -> value=-device, 00:01:32.186 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:32.186 ==> default: -> value=-drive, 00:01:32.186 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:01:32.186 ==> default: -> value=-device, 00:01:32.186 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:32.186 ==> default: -> value=-device, 00:01:32.186 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:32.186 ==> default: -> value=-drive, 00:01:32.186 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:32.186 ==> default: -> value=-device, 00:01:32.186 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:32.186 ==> default: -> value=-drive, 00:01:32.186 ==> 
default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:32.186 ==> default: -> value=-device, 00:01:32.186 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:32.186 ==> default: -> value=-drive, 00:01:32.186 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:32.186 ==> default: -> value=-device, 00:01:32.186 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:32.446 ==> default: Creating shared folders metadata... 00:01:32.446 ==> default: Starting domain. 00:01:34.351 ==> default: Waiting for domain to get an IP address... 00:01:49.293 ==> default: Waiting for SSH to become available... 00:01:50.669 ==> default: Configuring and enabling network interfaces... 00:01:55.937 default: SSH address: 192.168.121.59:22 00:01:55.937 default: SSH username: vagrant 00:01:55.937 default: SSH auth method: private key 00:01:57.311 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:05.428 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:10.704 ==> default: Mounting SSHFS shared folder... 00:02:11.637 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:11.637 ==> default: Checking Mount.. 00:02:13.012 ==> default: Folder Successfully Mounted! 00:02:13.012 ==> default: Running provisioner: file... 00:02:13.946 default: ~/.gitconfig => .gitconfig 00:02:14.205 00:02:14.205 SUCCESS! 00:02:14.205 00:02:14.205 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora38-libvirt and type "vagrant ssh" to use. 00:02:14.205 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:14.205 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora38-libvirt" to destroy all trace of vm. 00:02:14.205 00:02:14.215 [Pipeline] } 00:02:14.235 [Pipeline] // stage 00:02:14.246 [Pipeline] dir 00:02:14.246 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora38-libvirt 00:02:14.248 [Pipeline] { 00:02:14.264 [Pipeline] catchError 00:02:14.266 [Pipeline] { 00:02:14.282 [Pipeline] sh 00:02:14.562 + vagrant ssh-config --host vagrant 00:02:14.562 + sed -ne /^Host/,$p 00:02:14.562 + tee ssh_conf 00:02:18.756 Host vagrant 00:02:18.756 HostName 192.168.121.59 00:02:18.756 User vagrant 00:02:18.756 Port 22 00:02:18.756 UserKnownHostsFile /dev/null 00:02:18.756 StrictHostKeyChecking no 00:02:18.756 PasswordAuthentication no 00:02:18.756 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:18.756 IdentitiesOnly yes 00:02:18.756 LogLevel FATAL 00:02:18.756 ForwardAgent yes 00:02:18.756 ForwardX11 yes 00:02:18.756 00:02:18.769 [Pipeline] withEnv 00:02:18.771 [Pipeline] { 00:02:18.785 [Pipeline] sh 00:02:19.061 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:19.061 source /etc/os-release 00:02:19.061 [[ -e /image.version ]] && img=$(< /image.version) 00:02:19.061 # Minimal, systemd-like check. 
00:02:19.061 if [[ -e /.dockerenv ]]; then 00:02:19.061 # Clear garbage from the node's name: 00:02:19.061 # agt-er_autotest_547-896 -> autotest_547-896 00:02:19.061 # $HOSTNAME is the actual container id 00:02:19.061 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:19.061 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:19.061 # We can assume this is a mount from a host where container is running, 00:02:19.061 # so fetch its hostname to easily identify the target swarm worker. 00:02:19.061 container="$(< /etc/hostname) ($agent)" 00:02:19.061 else 00:02:19.061 # Fallback 00:02:19.061 container=$agent 00:02:19.061 fi 00:02:19.061 fi 00:02:19.061 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:19.061 00:02:19.330 [Pipeline] } 00:02:19.348 [Pipeline] // withEnv 00:02:19.356 [Pipeline] setCustomBuildProperty 00:02:19.371 [Pipeline] stage 00:02:19.374 [Pipeline] { (Tests) 00:02:19.395 [Pipeline] sh 00:02:19.678 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:19.691 [Pipeline] sh 00:02:19.965 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:19.991 [Pipeline] timeout 00:02:19.991 Timeout set to expire in 30 min 00:02:19.997 [Pipeline] { 00:02:20.018 [Pipeline] sh 00:02:20.291 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:20.895 HEAD is now at dbef7efac test: fix dpdk builds on ubuntu24 00:02:20.909 [Pipeline] sh 00:02:21.190 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:21.462 [Pipeline] sh 00:02:21.742 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:21.759 [Pipeline] sh 00:02:22.039 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:22.039 ++ readlink -f spdk_repo 00:02:22.039 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:22.039 + [[ -n /home/vagrant/spdk_repo ]] 00:02:22.039 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:22.039 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:22.039 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:22.039 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:22.039 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:22.039 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:22.039 + cd /home/vagrant/spdk_repo 00:02:22.039 + source /etc/os-release 00:02:22.039 ++ NAME='Fedora Linux' 00:02:22.039 ++ VERSION='38 (Cloud Edition)' 00:02:22.039 ++ ID=fedora 00:02:22.039 ++ VERSION_ID=38 00:02:22.039 ++ VERSION_CODENAME= 00:02:22.039 ++ PLATFORM_ID=platform:f38 00:02:22.039 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:22.039 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:22.039 ++ LOGO=fedora-logo-icon 00:02:22.039 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:22.039 ++ HOME_URL=https://fedoraproject.org/ 00:02:22.039 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:22.039 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:22.039 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:22.039 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:22.039 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:22.039 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:22.039 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:22.039 ++ SUPPORT_END=2024-05-14 00:02:22.039 ++ VARIANT='Cloud Edition' 00:02:22.039 ++ VARIANT_ID=cloud 00:02:22.039 + uname -a 00:02:22.039 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:22.039 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:22.297 Hugepages 00:02:22.297 node hugesize free / total 00:02:22.297 node0 1048576kB 0 / 0 00:02:22.297 node0 2048kB 0 / 0 00:02:22.297 00:02:22.297 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:22.297 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:22.297 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:22.297 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:22.297 + rm -f /tmp/spdk-ld-path 00:02:22.297 + source autorun-spdk.conf 00:02:22.297 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:22.297 ++ SPDK_TEST_NVMF=1 00:02:22.297 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:22.297 ++ SPDK_TEST_URING=1 00:02:22.297 ++ SPDK_TEST_USDT=1 00:02:22.297 ++ SPDK_RUN_UBSAN=1 00:02:22.297 ++ NET_TYPE=virt 00:02:22.297 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:22.297 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:22.297 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:22.297 ++ RUN_NIGHTLY=1 00:02:22.297 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:22.297 + [[ -n '' ]] 00:02:22.297 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:22.297 + for M in /var/spdk/build-*-manifest.txt 00:02:22.297 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:22.297 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:22.556 + for M in /var/spdk/build-*-manifest.txt 00:02:22.556 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:22.556 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:22.556 ++ uname 00:02:22.556 + [[ Linux == \L\i\n\u\x ]] 00:02:22.556 + sudo dmesg -T 00:02:22.556 + sudo dmesg --clear 00:02:22.556 + dmesg_pid=5822 00:02:22.556 + [[ Fedora Linux == FreeBSD ]] 00:02:22.556 + sudo dmesg -Tw 00:02:22.556 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:22.556 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:22.556 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:22.556 + [[ -x /usr/src/fio-static/fio ]] 00:02:22.556 + export FIO_BIN=/usr/src/fio-static/fio 
00:02:22.556 + FIO_BIN=/usr/src/fio-static/fio 00:02:22.556 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:22.556 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:22.556 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:22.556 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:22.556 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:22.556 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:22.556 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:22.556 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:22.556 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:22.556 Test configuration: 00:02:22.556 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:22.556 SPDK_TEST_NVMF=1 00:02:22.556 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:22.556 SPDK_TEST_URING=1 00:02:22.556 SPDK_TEST_USDT=1 00:02:22.556 SPDK_RUN_UBSAN=1 00:02:22.556 NET_TYPE=virt 00:02:22.556 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:22.556 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:22.556 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:22.556 RUN_NIGHTLY=1 10:07:35 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:22.556 10:07:35 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:22.556 10:07:35 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:22.556 10:07:35 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:22.556 10:07:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.557 10:07:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.557 10:07:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.557 10:07:35 -- paths/export.sh@5 -- $ export PATH 00:02:22.557 10:07:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.557 10:07:35 -- common/autobuild_common.sh@437 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:22.557 10:07:35 -- common/autobuild_common.sh@438 -- $ date +%s 00:02:22.557 10:07:35 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721988455.XXXXXX 00:02:22.557 10:07:35 -- 
common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721988455.A6IUwA 00:02:22.557 10:07:35 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:02:22.557 10:07:35 -- common/autobuild_common.sh@444 -- $ '[' -n v22.11.4 ']' 00:02:22.557 10:07:35 -- common/autobuild_common.sh@445 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:22.557 10:07:35 -- common/autobuild_common.sh@445 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:22.557 10:07:35 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:22.557 10:07:35 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:22.557 10:07:35 -- common/autobuild_common.sh@454 -- $ get_config_params 00:02:22.557 10:07:35 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:02:22.557 10:07:35 -- common/autotest_common.sh@10 -- $ set +x 00:02:22.557 10:07:35 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:22.557 10:07:35 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:22.557 10:07:35 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:22.557 10:07:35 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:22.557 10:07:35 -- spdk/autobuild.sh@16 -- $ date -u 00:02:22.557 Fri Jul 26 10:07:35 AM UTC 2024 00:02:22.557 10:07:35 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:22.557 LTS-60-gdbef7efac 00:02:22.557 10:07:35 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:22.557 10:07:35 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:22.557 10:07:35 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:22.557 10:07:35 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:22.557 10:07:35 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:22.557 10:07:35 -- common/autotest_common.sh@10 -- $ set +x 00:02:22.557 ************************************ 00:02:22.557 START TEST ubsan 00:02:22.557 ************************************ 00:02:22.557 using ubsan 00:02:22.557 10:07:35 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:02:22.557 00:02:22.557 real 0m0.000s 00:02:22.557 user 0m0.000s 00:02:22.557 sys 0m0.000s 00:02:22.557 10:07:35 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:22.557 10:07:35 -- common/autotest_common.sh@10 -- $ set +x 00:02:22.557 ************************************ 00:02:22.557 END TEST ubsan 00:02:22.557 ************************************ 00:02:22.816 10:07:36 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:22.816 10:07:36 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:22.816 10:07:36 -- common/autobuild_common.sh@430 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:22.816 10:07:36 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:02:22.816 10:07:36 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:22.816 10:07:36 -- common/autotest_common.sh@10 -- $ set +x 00:02:22.816 ************************************ 00:02:22.816 START TEST build_native_dpdk 00:02:22.816 ************************************ 00:02:22.816 10:07:36 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:02:22.816 
10:07:36 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:22.816 10:07:36 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:22.816 10:07:36 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:22.816 10:07:36 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:22.816 10:07:36 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:22.816 10:07:36 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:22.816 10:07:36 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:22.816 10:07:36 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:22.816 10:07:36 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:22.816 10:07:36 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:22.816 10:07:36 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:22.816 10:07:36 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:22.816 10:07:36 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:22.816 10:07:36 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:22.816 10:07:36 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:22.816 10:07:36 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:22.816 10:07:36 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:22.816 10:07:36 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:22.816 10:07:36 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:22.816 10:07:36 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:22.816 caf0f5d395 version: 22.11.4 00:02:22.816 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:22.816 dc9c799c7d vhost: fix missing spinlock unlock 00:02:22.816 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:22.816 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:22.816 10:07:36 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:22.816 10:07:36 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:22.816 10:07:36 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:22.816 10:07:36 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:22.816 10:07:36 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:22.816 10:07:36 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:22.816 10:07:36 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:22.816 10:07:36 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:22.816 10:07:36 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:22.816 10:07:36 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:22.816 10:07:36 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:22.816 10:07:36 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:22.816 10:07:36 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:22.816 10:07:36 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:22.816 10:07:36 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:22.816 10:07:36 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:22.816 10:07:36 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:22.816 10:07:36 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 
00:02:22.816 10:07:36 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:22.816 10:07:36 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:22.816 10:07:36 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:22.816 10:07:36 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:22.816 10:07:36 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:22.816 10:07:36 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:22.816 10:07:36 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:22.816 10:07:36 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:22.816 10:07:36 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:22.816 10:07:36 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:22.816 10:07:36 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:22.816 10:07:36 -- scripts/common.sh@343 -- $ case "$op" in 00:02:22.816 10:07:36 -- scripts/common.sh@344 -- $ : 1 00:02:22.816 10:07:36 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:22.816 10:07:36 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:22.816 10:07:36 -- scripts/common.sh@364 -- $ decimal 22 00:02:22.816 10:07:36 -- scripts/common.sh@352 -- $ local d=22 00:02:22.816 10:07:36 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:22.816 10:07:36 -- scripts/common.sh@354 -- $ echo 22 00:02:22.816 10:07:36 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:22.816 10:07:36 -- scripts/common.sh@365 -- $ decimal 21 00:02:22.816 10:07:36 -- scripts/common.sh@352 -- $ local d=21 00:02:22.816 10:07:36 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:22.816 10:07:36 -- scripts/common.sh@354 -- $ echo 21 00:02:22.816 10:07:36 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:22.816 10:07:36 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:22.816 10:07:36 -- scripts/common.sh@366 -- $ return 1 00:02:22.816 10:07:36 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:22.816 patching file config/rte_config.h 00:02:22.816 Hunk #1 succeeded at 60 (offset 1 line). 00:02:22.816 10:07:36 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:22.816 10:07:36 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:22.816 10:07:36 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:22.816 10:07:36 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:22.816 10:07:36 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:22.816 10:07:36 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:22.816 10:07:36 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:22.816 10:07:36 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:22.816 10:07:36 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:22.816 10:07:36 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:22.816 10:07:36 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:22.816 10:07:36 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:22.816 10:07:36 -- scripts/common.sh@343 -- $ case "$op" in 00:02:22.816 10:07:36 -- scripts/common.sh@344 -- $ : 1 00:02:22.816 10:07:36 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:22.816 10:07:36 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:22.816 10:07:36 -- scripts/common.sh@364 -- $ decimal 22 00:02:22.816 10:07:36 -- scripts/common.sh@352 -- $ local d=22 00:02:22.816 10:07:36 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:22.816 10:07:36 -- scripts/common.sh@354 -- $ echo 22 00:02:22.816 10:07:36 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:22.816 10:07:36 -- scripts/common.sh@365 -- $ decimal 24 00:02:22.816 10:07:36 -- scripts/common.sh@352 -- $ local d=24 00:02:22.816 10:07:36 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:22.816 10:07:36 -- scripts/common.sh@354 -- $ echo 24 00:02:22.816 10:07:36 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:22.816 10:07:36 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:22.816 10:07:36 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:22.816 10:07:36 -- scripts/common.sh@367 -- $ return 0 00:02:22.816 10:07:36 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:22.816 patching file lib/pcapng/rte_pcapng.c 00:02:22.816 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:22.816 10:07:36 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:22.816 10:07:36 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:22.816 10:07:36 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:22.816 10:07:36 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:22.816 10:07:36 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:28.085 The Meson build system 00:02:28.085 Version: 1.3.1 00:02:28.085 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:28.085 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:28.085 Build type: native build 00:02:28.085 Program cat found: YES (/usr/bin/cat) 00:02:28.085 Project name: DPDK 00:02:28.085 Project version: 22.11.4 00:02:28.085 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:28.085 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:28.085 Host machine cpu family: x86_64 00:02:28.085 Host machine cpu: x86_64 00:02:28.085 Message: ## Building in Developer Mode ## 00:02:28.085 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:28.085 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:28.085 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:28.085 Program objdump found: YES (/usr/bin/objdump) 00:02:28.085 Program python3 found: YES (/usr/bin/python3) 00:02:28.085 Program cat found: YES (/usr/bin/cat) 00:02:28.085 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:28.085 Checking for size of "void *" : 8 00:02:28.085 Checking for size of "void *" : 8 (cached) 00:02:28.085 Library m found: YES 00:02:28.085 Library numa found: YES 00:02:28.085 Has header "numaif.h" : YES 00:02:28.085 Library fdt found: NO 00:02:28.085 Library execinfo found: NO 00:02:28.085 Has header "execinfo.h" : YES 00:02:28.085 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:28.085 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:28.085 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:28.085 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:28.085 Run-time dependency openssl found: YES 3.0.9 00:02:28.085 Run-time dependency libpcap found: YES 1.10.4 00:02:28.085 Has header "pcap.h" with dependency libpcap: YES 00:02:28.085 Compiler for C supports arguments -Wcast-qual: YES 00:02:28.085 Compiler for C supports arguments -Wdeprecated: YES 00:02:28.085 Compiler for C supports arguments -Wformat: YES 00:02:28.085 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:28.085 Compiler for C supports arguments -Wformat-security: NO 00:02:28.085 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:28.085 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:28.085 Compiler for C supports arguments -Wnested-externs: YES 00:02:28.085 Compiler for C supports arguments -Wold-style-definition: YES 00:02:28.085 Compiler for C supports arguments -Wpointer-arith: YES 00:02:28.085 Compiler for C supports arguments -Wsign-compare: YES 00:02:28.085 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:28.085 Compiler for C supports arguments -Wundef: YES 00:02:28.085 Compiler for C supports arguments -Wwrite-strings: YES 00:02:28.085 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:28.085 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:28.085 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:28.085 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:28.085 Compiler for C supports arguments -mavx512f: YES 00:02:28.085 Checking if "AVX512 checking" compiles: YES 00:02:28.085 Fetching value of define "__SSE4_2__" : 1 00:02:28.085 Fetching value of define "__AES__" : 1 00:02:28.085 Fetching value of define "__AVX__" : 1 00:02:28.085 Fetching value of define "__AVX2__" : 1 00:02:28.085 Fetching value of define "__AVX512BW__" : (undefined) 00:02:28.085 Fetching value of define "__AVX512CD__" : (undefined) 00:02:28.085 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:28.085 Fetching value of define "__AVX512F__" : (undefined) 00:02:28.085 Fetching value of define "__AVX512VL__" : (undefined) 00:02:28.085 Fetching value of define "__PCLMUL__" : 1 00:02:28.085 Fetching value of define "__RDRND__" : 1 00:02:28.085 Fetching value of define "__RDSEED__" : 1 00:02:28.085 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:28.085 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:28.085 Message: lib/kvargs: Defining dependency "kvargs" 00:02:28.085 Message: lib/telemetry: Defining dependency "telemetry" 00:02:28.085 Checking for function "getentropy" : YES 00:02:28.085 Message: lib/eal: Defining dependency "eal" 00:02:28.085 Message: lib/ring: Defining dependency "ring" 00:02:28.085 Message: lib/rcu: Defining dependency "rcu" 00:02:28.085 Message: lib/mempool: Defining dependency "mempool" 00:02:28.085 Message: lib/mbuf: Defining dependency "mbuf" 00:02:28.085 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:28.085 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:28.085 Compiler for C supports arguments -mpclmul: YES 00:02:28.085 Compiler for C supports arguments -maes: YES 00:02:28.085 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:28.085 Compiler for C supports arguments -mavx512bw: YES 00:02:28.085 Compiler for C supports arguments -mavx512dq: YES 00:02:28.085 Compiler for C supports arguments -mavx512vl: YES 00:02:28.085 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:28.085 Compiler for C supports arguments -mavx2: YES 00:02:28.085 Compiler for C supports arguments -mavx: YES 00:02:28.085 Message: lib/net: Defining dependency "net" 00:02:28.085 Message: lib/meter: Defining dependency "meter" 00:02:28.085 Message: lib/ethdev: Defining dependency "ethdev" 00:02:28.085 Message: lib/pci: Defining dependency "pci" 00:02:28.085 Message: lib/cmdline: Defining dependency "cmdline" 00:02:28.086 Message: lib/metrics: Defining dependency "metrics" 00:02:28.086 Message: lib/hash: Defining dependency "hash" 00:02:28.086 Message: lib/timer: Defining dependency "timer" 00:02:28.086 Fetching value of define "__AVX2__" : 1 (cached) 00:02:28.086 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:28.086 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:28.086 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:28.086 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:28.086 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:28.086 Message: lib/acl: Defining dependency "acl" 00:02:28.086 Message: lib/bbdev: Defining dependency "bbdev" 00:02:28.086 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:28.086 Run-time dependency libelf found: YES 0.190 00:02:28.086 Message: lib/bpf: Defining dependency "bpf" 00:02:28.086 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:28.086 Message: lib/compressdev: Defining dependency "compressdev" 00:02:28.086 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:28.086 Message: lib/distributor: Defining dependency "distributor" 00:02:28.086 Message: lib/efd: Defining dependency "efd" 00:02:28.086 Message: lib/eventdev: Defining dependency "eventdev" 00:02:28.086 Message: lib/gpudev: Defining dependency "gpudev" 00:02:28.086 Message: lib/gro: Defining dependency "gro" 00:02:28.086 Message: lib/gso: Defining dependency "gso" 00:02:28.086 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:28.086 Message: lib/jobstats: Defining dependency "jobstats" 00:02:28.086 Message: lib/latencystats: Defining dependency "latencystats" 00:02:28.086 Message: lib/lpm: Defining dependency "lpm" 00:02:28.086 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:28.086 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:28.086 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:28.086 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:28.086 Message: lib/member: Defining dependency "member" 00:02:28.086 Message: lib/pcapng: Defining dependency "pcapng" 00:02:28.086 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:28.086 Message: lib/power: Defining dependency "power" 00:02:28.086 Message: lib/rawdev: Defining dependency "rawdev" 00:02:28.086 Message: lib/regexdev: Defining dependency "regexdev" 00:02:28.086 Message: lib/dmadev: Defining dependency "dmadev" 00:02:28.086 Message: lib/rib: Defining 
dependency "rib" 00:02:28.086 Message: lib/reorder: Defining dependency "reorder" 00:02:28.086 Message: lib/sched: Defining dependency "sched" 00:02:28.086 Message: lib/security: Defining dependency "security" 00:02:28.086 Message: lib/stack: Defining dependency "stack" 00:02:28.086 Has header "linux/userfaultfd.h" : YES 00:02:28.086 Message: lib/vhost: Defining dependency "vhost" 00:02:28.086 Message: lib/ipsec: Defining dependency "ipsec" 00:02:28.086 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:28.086 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:28.086 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:28.086 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:28.086 Message: lib/fib: Defining dependency "fib" 00:02:28.086 Message: lib/port: Defining dependency "port" 00:02:28.086 Message: lib/pdump: Defining dependency "pdump" 00:02:28.086 Message: lib/table: Defining dependency "table" 00:02:28.086 Message: lib/pipeline: Defining dependency "pipeline" 00:02:28.086 Message: lib/graph: Defining dependency "graph" 00:02:28.086 Message: lib/node: Defining dependency "node" 00:02:28.086 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:28.086 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:28.086 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:28.086 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:28.086 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:28.086 Compiler for C supports arguments -Wno-unused-value: YES 00:02:28.086 Compiler for C supports arguments -Wno-format: YES 00:02:28.086 Compiler for C supports arguments -Wno-format-security: YES 00:02:28.086 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:29.460 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:29.460 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:29.460 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:29.460 Fetching value of define "__AVX2__" : 1 (cached) 00:02:29.460 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:29.460 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:29.460 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:29.461 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:29.461 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:29.461 Program doxygen found: YES (/usr/bin/doxygen) 00:02:29.461 Configuring doxy-api.conf using configuration 00:02:29.461 Program sphinx-build found: NO 00:02:29.461 Configuring rte_build_config.h using configuration 00:02:29.461 Message: 00:02:29.461 ================= 00:02:29.461 Applications Enabled 00:02:29.461 ================= 00:02:29.461 00:02:29.461 apps: 00:02:29.461 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:29.461 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:29.461 test-security-perf, 00:02:29.461 00:02:29.461 Message: 00:02:29.461 ================= 00:02:29.461 Libraries Enabled 00:02:29.461 ================= 00:02:29.461 00:02:29.461 libs: 00:02:29.461 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:29.461 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:29.461 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:29.461 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:02:29.461 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:29.461 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:29.461 table, pipeline, graph, node, 00:02:29.461 00:02:29.461 Message: 00:02:29.461 =============== 00:02:29.461 Drivers Enabled 00:02:29.461 =============== 00:02:29.461 00:02:29.461 common: 00:02:29.461 00:02:29.461 bus: 00:02:29.461 pci, vdev, 00:02:29.461 mempool: 00:02:29.461 ring, 00:02:29.461 dma: 00:02:29.461 00:02:29.461 net: 00:02:29.461 i40e, 00:02:29.461 raw: 00:02:29.461 00:02:29.461 crypto: 00:02:29.461 00:02:29.461 compress: 00:02:29.461 00:02:29.461 regex: 00:02:29.461 00:02:29.461 vdpa: 00:02:29.461 00:02:29.461 event: 00:02:29.461 00:02:29.461 baseband: 00:02:29.461 00:02:29.461 gpu: 00:02:29.461 00:02:29.461 00:02:29.461 Message: 00:02:29.461 ================= 00:02:29.461 Content Skipped 00:02:29.461 ================= 00:02:29.461 00:02:29.461 apps: 00:02:29.461 00:02:29.461 libs: 00:02:29.461 kni: explicitly disabled via build config (deprecated lib) 00:02:29.461 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:29.461 00:02:29.461 drivers: 00:02:29.461 common/cpt: not in enabled drivers build config 00:02:29.461 common/dpaax: not in enabled drivers build config 00:02:29.461 common/iavf: not in enabled drivers build config 00:02:29.461 common/idpf: not in enabled drivers build config 00:02:29.461 common/mvep: not in enabled drivers build config 00:02:29.461 common/octeontx: not in enabled drivers build config 00:02:29.461 bus/auxiliary: not in enabled drivers build config 00:02:29.461 bus/dpaa: not in enabled drivers build config 00:02:29.461 bus/fslmc: not in enabled drivers build config 00:02:29.461 bus/ifpga: not in enabled drivers build config 00:02:29.461 bus/vmbus: not in enabled drivers build config 00:02:29.461 common/cnxk: not in enabled drivers build config 00:02:29.461 common/mlx5: not in enabled drivers build config 00:02:29.461 common/qat: not in enabled drivers build config 00:02:29.461 common/sfc_efx: not in enabled drivers build config 00:02:29.461 mempool/bucket: not in enabled drivers build config 00:02:29.461 mempool/cnxk: not in enabled drivers build config 00:02:29.461 mempool/dpaa: not in enabled drivers build config 00:02:29.461 mempool/dpaa2: not in enabled drivers build config 00:02:29.461 mempool/octeontx: not in enabled drivers build config 00:02:29.461 mempool/stack: not in enabled drivers build config 00:02:29.461 dma/cnxk: not in enabled drivers build config 00:02:29.461 dma/dpaa: not in enabled drivers build config 00:02:29.461 dma/dpaa2: not in enabled drivers build config 00:02:29.461 dma/hisilicon: not in enabled drivers build config 00:02:29.461 dma/idxd: not in enabled drivers build config 00:02:29.461 dma/ioat: not in enabled drivers build config 00:02:29.461 dma/skeleton: not in enabled drivers build config 00:02:29.461 net/af_packet: not in enabled drivers build config 00:02:29.461 net/af_xdp: not in enabled drivers build config 00:02:29.461 net/ark: not in enabled drivers build config 00:02:29.461 net/atlantic: not in enabled drivers build config 00:02:29.461 net/avp: not in enabled drivers build config 00:02:29.461 net/axgbe: not in enabled drivers build config 00:02:29.461 net/bnx2x: not in enabled drivers build config 00:02:29.461 net/bnxt: not in enabled drivers build config 00:02:29.461 net/bonding: not in enabled drivers build config 00:02:29.461 net/cnxk: not in enabled drivers build config 00:02:29.461 net/cxgbe: not in 
enabled drivers build config 00:02:29.461 net/dpaa: not in enabled drivers build config 00:02:29.461 net/dpaa2: not in enabled drivers build config 00:02:29.461 net/e1000: not in enabled drivers build config 00:02:29.461 net/ena: not in enabled drivers build config 00:02:29.461 net/enetc: not in enabled drivers build config 00:02:29.461 net/enetfec: not in enabled drivers build config 00:02:29.461 net/enic: not in enabled drivers build config 00:02:29.461 net/failsafe: not in enabled drivers build config 00:02:29.461 net/fm10k: not in enabled drivers build config 00:02:29.461 net/gve: not in enabled drivers build config 00:02:29.461 net/hinic: not in enabled drivers build config 00:02:29.461 net/hns3: not in enabled drivers build config 00:02:29.461 net/iavf: not in enabled drivers build config 00:02:29.461 net/ice: not in enabled drivers build config 00:02:29.461 net/idpf: not in enabled drivers build config 00:02:29.461 net/igc: not in enabled drivers build config 00:02:29.461 net/ionic: not in enabled drivers build config 00:02:29.461 net/ipn3ke: not in enabled drivers build config 00:02:29.461 net/ixgbe: not in enabled drivers build config 00:02:29.461 net/kni: not in enabled drivers build config 00:02:29.461 net/liquidio: not in enabled drivers build config 00:02:29.461 net/mana: not in enabled drivers build config 00:02:29.461 net/memif: not in enabled drivers build config 00:02:29.461 net/mlx4: not in enabled drivers build config 00:02:29.461 net/mlx5: not in enabled drivers build config 00:02:29.461 net/mvneta: not in enabled drivers build config 00:02:29.461 net/mvpp2: not in enabled drivers build config 00:02:29.461 net/netvsc: not in enabled drivers build config 00:02:29.461 net/nfb: not in enabled drivers build config 00:02:29.461 net/nfp: not in enabled drivers build config 00:02:29.461 net/ngbe: not in enabled drivers build config 00:02:29.461 net/null: not in enabled drivers build config 00:02:29.461 net/octeontx: not in enabled drivers build config 00:02:29.461 net/octeon_ep: not in enabled drivers build config 00:02:29.461 net/pcap: not in enabled drivers build config 00:02:29.461 net/pfe: not in enabled drivers build config 00:02:29.461 net/qede: not in enabled drivers build config 00:02:29.461 net/ring: not in enabled drivers build config 00:02:29.461 net/sfc: not in enabled drivers build config 00:02:29.461 net/softnic: not in enabled drivers build config 00:02:29.461 net/tap: not in enabled drivers build config 00:02:29.461 net/thunderx: not in enabled drivers build config 00:02:29.461 net/txgbe: not in enabled drivers build config 00:02:29.461 net/vdev_netvsc: not in enabled drivers build config 00:02:29.461 net/vhost: not in enabled drivers build config 00:02:29.461 net/virtio: not in enabled drivers build config 00:02:29.461 net/vmxnet3: not in enabled drivers build config 00:02:29.461 raw/cnxk_bphy: not in enabled drivers build config 00:02:29.461 raw/cnxk_gpio: not in enabled drivers build config 00:02:29.461 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:29.461 raw/ifpga: not in enabled drivers build config 00:02:29.461 raw/ntb: not in enabled drivers build config 00:02:29.461 raw/skeleton: not in enabled drivers build config 00:02:29.461 crypto/armv8: not in enabled drivers build config 00:02:29.461 crypto/bcmfs: not in enabled drivers build config 00:02:29.461 crypto/caam_jr: not in enabled drivers build config 00:02:29.461 crypto/ccp: not in enabled drivers build config 00:02:29.461 crypto/cnxk: not in enabled drivers build config 00:02:29.461 
crypto/dpaa_sec: not in enabled drivers build config 00:02:29.461 crypto/dpaa2_sec: not in enabled drivers build config 00:02:29.461 crypto/ipsec_mb: not in enabled drivers build config 00:02:29.461 crypto/mlx5: not in enabled drivers build config 00:02:29.461 crypto/mvsam: not in enabled drivers build config 00:02:29.461 crypto/nitrox: not in enabled drivers build config 00:02:29.461 crypto/null: not in enabled drivers build config 00:02:29.461 crypto/octeontx: not in enabled drivers build config 00:02:29.461 crypto/openssl: not in enabled drivers build config 00:02:29.461 crypto/scheduler: not in enabled drivers build config 00:02:29.461 crypto/uadk: not in enabled drivers build config 00:02:29.461 crypto/virtio: not in enabled drivers build config 00:02:29.461 compress/isal: not in enabled drivers build config 00:02:29.461 compress/mlx5: not in enabled drivers build config 00:02:29.461 compress/octeontx: not in enabled drivers build config 00:02:29.461 compress/zlib: not in enabled drivers build config 00:02:29.461 regex/mlx5: not in enabled drivers build config 00:02:29.461 regex/cn9k: not in enabled drivers build config 00:02:29.461 vdpa/ifc: not in enabled drivers build config 00:02:29.461 vdpa/mlx5: not in enabled drivers build config 00:02:29.461 vdpa/sfc: not in enabled drivers build config 00:02:29.461 event/cnxk: not in enabled drivers build config 00:02:29.461 event/dlb2: not in enabled drivers build config 00:02:29.461 event/dpaa: not in enabled drivers build config 00:02:29.461 event/dpaa2: not in enabled drivers build config 00:02:29.462 event/dsw: not in enabled drivers build config 00:02:29.462 event/opdl: not in enabled drivers build config 00:02:29.462 event/skeleton: not in enabled drivers build config 00:02:29.462 event/sw: not in enabled drivers build config 00:02:29.462 event/octeontx: not in enabled drivers build config 00:02:29.462 baseband/acc: not in enabled drivers build config 00:02:29.462 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:29.462 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:29.462 baseband/la12xx: not in enabled drivers build config 00:02:29.462 baseband/null: not in enabled drivers build config 00:02:29.462 baseband/turbo_sw: not in enabled drivers build config 00:02:29.462 gpu/cuda: not in enabled drivers build config 00:02:29.462 00:02:29.462 00:02:29.462 Build targets in project: 314 00:02:29.462 00:02:29.462 DPDK 22.11.4 00:02:29.462 00:02:29.462 User defined options 00:02:29.462 libdir : lib 00:02:29.462 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:29.462 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:29.462 c_link_args : 00:02:29.462 enable_docs : false 00:02:29.462 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:29.462 enable_kmods : false 00:02:29.462 machine : native 00:02:29.462 tests : false 00:02:29.462 00:02:29.462 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:29.462 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
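The "User defined options" summary above corresponds, roughly, to a meson configure step along the following lines. This is a sketch reconstructed from the summary keys, not the exact command run by SPDK's autobuild script (which is not shown in this log): the build directory name is taken from the ninja step that follows, the DPDK-specific option names (enable_docs, enable_drivers, enable_kmods, machine, tests) are assumed to match the summary keys, and, per the deprecation warning above, the explicit `meson setup` subcommand form is used instead of bare `meson [options]`:

    # assumed to run from the DPDK source tree (/home/vagrant/spdk_repo/dpdk)
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    ninja -C build-tmp -j10

The empty c_link_args entry is simply omitted here; everything else mirrors the summary printed by meson.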
00:02:29.462 10:07:42 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:29.462 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:29.728 [1/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:29.728 [2/743] Generating lib/rte_telemetry_def with a custom command 00:02:29.728 [3/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:29.728 [4/743] Generating lib/rte_kvargs_def with a custom command 00:02:29.728 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:29.728 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:29.728 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:29.728 [8/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:29.728 [9/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:29.728 [10/743] Linking static target lib/librte_kvargs.a 00:02:29.728 [11/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:29.728 [12/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:29.728 [13/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:29.728 [14/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:29.728 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:29.996 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:29.996 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:29.996 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:29.996 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:29.996 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.996 [21/743] Linking target lib/librte_kvargs.so.23.0 00:02:29.996 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:29.996 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:29.996 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:29.996 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:29.996 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:29.996 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:30.256 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:30.256 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:30.256 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:30.256 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:30.256 [32/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:30.256 [33/743] Linking static target lib/librte_telemetry.a 00:02:30.256 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:30.256 [35/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:30.256 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:30.256 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:30.256 [38/743] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:30.256 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:30.256 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:30.256 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:30.515 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:30.515 [43/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:30.515 [44/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:30.515 [45/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.515 [46/743] Linking target lib/librte_telemetry.so.23.0 00:02:30.515 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:30.774 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:30.774 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:30.774 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:30.774 [51/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:30.774 [52/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:30.774 [53/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:30.774 [54/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:30.774 [55/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:30.774 [56/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:30.774 [57/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:30.774 [58/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:30.774 [59/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:30.774 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:30.774 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:30.774 [62/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:30.774 [63/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:31.032 [64/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:31.032 [65/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:31.032 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:31.032 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:31.032 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:31.032 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:31.032 [70/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:31.032 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:31.032 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:31.032 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:31.032 [74/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:31.032 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:31.032 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:31.291 [77/743] Generating lib/rte_eal_def with a custom command 00:02:31.291 [78/743] Generating lib/rte_eal_mingw with a custom command 
00:02:31.291 [79/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:31.291 [80/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:31.291 [81/743] Generating lib/rte_ring_def with a custom command 00:02:31.291 [82/743] Generating lib/rte_ring_mingw with a custom command 00:02:31.291 [83/743] Generating lib/rte_rcu_def with a custom command 00:02:31.291 [84/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:31.291 [85/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:31.292 [86/743] Generating lib/rte_rcu_mingw with a custom command 00:02:31.292 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:31.292 [88/743] Linking static target lib/librte_ring.a 00:02:31.292 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:31.292 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:31.292 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:31.549 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:31.550 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:31.550 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.808 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:31.808 [96/743] Linking static target lib/librte_eal.a 00:02:31.808 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:31.808 [98/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:31.808 [99/743] Generating lib/rte_mbuf_def with a custom command 00:02:31.808 [100/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:31.808 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:32.066 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:32.066 [103/743] Linking static target lib/librte_rcu.a 00:02:32.066 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:32.066 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:32.325 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:32.325 [107/743] Linking static target lib/librte_mempool.a 00:02:32.325 [108/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.325 [109/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:32.325 [110/743] Generating lib/rte_net_def with a custom command 00:02:32.325 [111/743] Generating lib/rte_net_mingw with a custom command 00:02:32.583 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:32.583 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:32.583 [114/743] Generating lib/rte_meter_def with a custom command 00:02:32.583 [115/743] Generating lib/rte_meter_mingw with a custom command 00:02:32.583 [116/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:32.583 [117/743] Linking static target lib/librte_meter.a 00:02:32.583 [118/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:32.583 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:32.842 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:32.842 [121/743] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:32.842 [122/743] Generating lib/meter.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:32.842 [123/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:32.842 [124/743] Linking static target lib/librte_mbuf.a 00:02:33.100 [125/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:33.100 [126/743] Linking static target lib/librte_net.a 00:02:33.100 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.359 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.359 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:33.359 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:33.359 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:33.359 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:33.359 [133/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.617 [134/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:33.617 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:34.185 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:34.185 [137/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:34.185 [138/743] Generating lib/rte_ethdev_def with a custom command 00:02:34.185 [139/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:34.185 [140/743] Generating lib/rte_pci_def with a custom command 00:02:34.185 [141/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:34.185 [142/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:34.185 [143/743] Generating lib/rte_pci_mingw with a custom command 00:02:34.185 [144/743] Linking static target lib/librte_pci.a 00:02:34.185 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:34.185 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:34.185 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:34.444 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:34.444 [149/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.444 [150/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:34.444 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:34.444 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:34.444 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:34.444 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:34.444 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:34.444 [156/743] Generating lib/rte_cmdline_def with a custom command 00:02:34.444 [157/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:34.702 [158/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:34.702 [159/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:34.702 [160/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:34.702 [161/743] Generating lib/rte_metrics_def with a custom command 00:02:34.702 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:02:34.702 [163/743] Compiling C object 
lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:34.702 [164/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:34.702 [165/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:34.702 [166/743] Generating lib/rte_hash_def with a custom command 00:02:34.702 [167/743] Generating lib/rte_hash_mingw with a custom command 00:02:34.702 [168/743] Generating lib/rte_timer_def with a custom command 00:02:34.702 [169/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:34.961 [170/743] Generating lib/rte_timer_mingw with a custom command 00:02:34.961 [171/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:34.961 [172/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:34.961 [173/743] Linking static target lib/librte_cmdline.a 00:02:35.220 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:35.220 [175/743] Linking static target lib/librte_metrics.a 00:02:35.220 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:35.220 [177/743] Linking static target lib/librte_timer.a 00:02:35.479 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.479 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.737 [180/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:35.737 [181/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:35.737 [182/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:35.737 [183/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.737 [184/743] Linking static target lib/librte_ethdev.a 00:02:36.302 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:36.302 [186/743] Generating lib/rte_acl_def with a custom command 00:02:36.302 [187/743] Generating lib/rte_acl_mingw with a custom command 00:02:36.302 [188/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:36.302 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:36.302 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:36.561 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:36.561 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:36.561 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:36.820 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:37.079 [195/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:37.079 [196/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:37.079 [197/743] Linking static target lib/librte_bitratestats.a 00:02:37.336 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.336 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:37.336 [200/743] Linking static target lib/librte_bbdev.a 00:02:37.336 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:37.595 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:37.595 [203/743] Linking static target lib/librte_hash.a 00:02:37.854 [204/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:37.854 [205/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:37.854 [206/743] Linking static target 
lib/acl/libavx512_tmp.a 00:02:37.854 [207/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.854 [208/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:38.114 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:38.372 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.372 [211/743] Generating lib/rte_bpf_def with a custom command 00:02:38.372 [212/743] Generating lib/rte_bpf_mingw with a custom command 00:02:38.372 [213/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:38.372 [214/743] Generating lib/rte_cfgfile_def with a custom command 00:02:38.372 [215/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:38.372 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:38.631 [217/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:38.631 [218/743] Linking static target lib/librte_acl.a 00:02:38.631 [219/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:38.631 [220/743] Linking static target lib/librte_cfgfile.a 00:02:38.631 [221/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:38.889 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:38.889 [223/743] Generating lib/rte_compressdev_def with a custom command 00:02:38.889 [224/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:38.889 [225/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.889 [226/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.889 [227/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.147 [228/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:39.147 [229/743] Linking target lib/librte_eal.so.23.0 00:02:39.147 [230/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:39.147 [231/743] Generating lib/rte_cryptodev_def with a custom command 00:02:39.147 [232/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:39.148 [233/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:39.148 [234/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:39.148 [235/743] Linking target lib/librte_ring.so.23.0 00:02:39.148 [236/743] Linking target lib/librte_meter.so.23.0 00:02:39.406 [237/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:39.406 [238/743] Linking target lib/librte_pci.so.23.0 00:02:39.406 [239/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:39.406 [240/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:39.406 [241/743] Linking target lib/librte_rcu.so.23.0 00:02:39.406 [242/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:39.406 [243/743] Linking target lib/librte_mempool.so.23.0 00:02:39.406 [244/743] Linking target lib/librte_timer.so.23.0 00:02:39.406 [245/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:39.406 [246/743] Linking target lib/librte_acl.so.23.0 00:02:39.665 [247/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:39.665 [248/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:39.665 [249/743] Linking 
static target lib/librte_bpf.a 00:02:39.665 [250/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:39.665 [251/743] Linking target lib/librte_cfgfile.so.23.0 00:02:39.665 [252/743] Linking static target lib/librte_compressdev.a 00:02:39.665 [253/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:39.665 [254/743] Linking target lib/librte_mbuf.so.23.0 00:02:39.665 [255/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:39.665 [256/743] Generating lib/rte_distributor_def with a custom command 00:02:39.665 [257/743] Generating lib/rte_distributor_mingw with a custom command 00:02:39.665 [258/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:39.665 [259/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:39.665 [260/743] Generating lib/rte_efd_def with a custom command 00:02:39.926 [261/743] Linking target lib/librte_bbdev.so.23.0 00:02:39.926 [262/743] Linking target lib/librte_net.so.23.0 00:02:39.926 [263/743] Generating lib/rte_efd_mingw with a custom command 00:02:39.926 [264/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:39.926 [265/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.926 [266/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:39.926 [267/743] Linking target lib/librte_cmdline.so.23.0 00:02:39.926 [268/743] Linking target lib/librte_hash.so.23.0 00:02:40.185 [269/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:40.185 [270/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:40.185 [271/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:40.185 [272/743] Linking static target lib/librte_distributor.a 00:02:40.443 [273/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.443 [274/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.443 [275/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.443 [276/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:40.443 [277/743] Linking target lib/librte_ethdev.so.23.0 00:02:40.443 [278/743] Linking target lib/librte_distributor.so.23.0 00:02:40.443 [279/743] Linking target lib/librte_compressdev.so.23.0 00:02:40.701 [280/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:40.702 [281/743] Generating lib/rte_eventdev_def with a custom command 00:02:40.702 [282/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:40.702 [283/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:40.702 [284/743] Linking target lib/librte_metrics.so.23.0 00:02:40.702 [285/743] Linking target lib/librte_bpf.so.23.0 00:02:40.702 [286/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:40.959 [287/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:40.959 [288/743] Linking target lib/librte_bitratestats.so.23.0 00:02:40.959 [289/743] Generating lib/rte_gpudev_def with a custom command 00:02:40.959 [290/743] Generating lib/rte_gpudev_mingw with a custom command 
00:02:40.959 [291/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:41.279 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:41.279 [293/743] Linking static target lib/librte_efd.a 00:02:41.279 [294/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:41.537 [295/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:41.537 [296/743] Linking static target lib/librte_cryptodev.a 00:02:41.537 [297/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.537 [298/743] Linking target lib/librte_efd.so.23.0 00:02:41.537 [299/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:41.796 [300/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:41.796 [301/743] Generating lib/rte_gro_def with a custom command 00:02:41.796 [302/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:41.796 [303/743] Linking static target lib/librte_gpudev.a 00:02:41.796 [304/743] Generating lib/rte_gro_mingw with a custom command 00:02:41.796 [305/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:41.796 [306/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:42.055 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:42.314 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:42.314 [309/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:42.573 [310/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:42.573 [311/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:42.573 [312/743] Generating lib/rte_gso_def with a custom command 00:02:42.573 [313/743] Linking static target lib/librte_gro.a 00:02:42.573 [314/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:42.573 [315/743] Generating lib/rte_gso_mingw with a custom command 00:02:42.573 [316/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.573 [317/743] Linking target lib/librte_gpudev.so.23.0 00:02:42.573 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:42.831 [319/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.831 [320/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:42.831 [321/743] Linking target lib/librte_gro.so.23.0 00:02:42.831 [322/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:42.831 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:02:42.831 [324/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:43.090 [325/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:43.090 [326/743] Linking static target lib/librte_eventdev.a 00:02:43.090 [327/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:43.090 [328/743] Linking static target lib/librte_jobstats.a 00:02:43.090 [329/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:43.090 [330/743] Linking static target lib/librte_gso.a 00:02:43.090 [331/743] Generating lib/rte_jobstats_def with a custom command 00:02:43.090 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:43.348 [333/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.348 [334/743] Linking target 
lib/librte_gso.so.23.0 00:02:43.348 [335/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:43.348 [336/743] Generating lib/rte_latencystats_def with a custom command 00:02:43.348 [337/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:43.348 [338/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:43.348 [339/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:43.348 [340/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.348 [341/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:43.348 [342/743] Linking target lib/librte_jobstats.so.23.0 00:02:43.348 [343/743] Generating lib/rte_lpm_def with a custom command 00:02:43.606 [344/743] Generating lib/rte_lpm_mingw with a custom command 00:02:43.606 [345/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.606 [346/743] Linking target lib/librte_cryptodev.so.23.0 00:02:43.606 [347/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:43.606 [348/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:43.606 [349/743] Linking static target lib/librte_ip_frag.a 00:02:43.606 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:43.863 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.863 [352/743] Linking target lib/librte_ip_frag.so.23.0 00:02:44.120 [353/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:44.121 [354/743] Generating lib/rte_member_def with a custom command 00:02:44.121 [355/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:44.121 [356/743] Linking static target lib/librte_latencystats.a 00:02:44.121 [357/743] Generating lib/rte_member_mingw with a custom command 00:02:44.121 [358/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:44.121 [359/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:44.121 [360/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:44.121 [361/743] Generating lib/rte_pcapng_def with a custom command 00:02:44.121 [362/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:44.379 [363/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:44.379 [364/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:44.379 [365/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.379 [366/743] Linking target lib/librte_latencystats.so.23.0 00:02:44.379 [367/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:44.379 [368/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:44.379 [369/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:44.670 [370/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:44.670 [371/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:44.949 [372/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:44.949 [373/743] Generating lib/rte_power_def with a custom command 00:02:44.949 [374/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:44.949 [375/743] Linking 
static target lib/librte_lpm.a 00:02:44.949 [376/743] Generating lib/rte_power_mingw with a custom command 00:02:44.949 [377/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.949 [378/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:44.949 [379/743] Linking target lib/librte_eventdev.so.23.0 00:02:44.949 [380/743] Generating lib/rte_rawdev_def with a custom command 00:02:44.949 [381/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:45.208 [382/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:45.208 [383/743] Linking static target lib/librte_pcapng.a 00:02:45.208 [384/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:45.208 [385/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:45.208 [386/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.208 [387/743] Generating lib/rte_regexdev_def with a custom command 00:02:45.208 [388/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:45.208 [389/743] Linking target lib/librte_lpm.so.23.0 00:02:45.208 [390/743] Generating lib/rte_dmadev_def with a custom command 00:02:45.208 [391/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:45.208 [392/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:45.208 [393/743] Linking static target lib/librte_rawdev.a 00:02:45.208 [394/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:45.208 [395/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:45.466 [396/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:45.466 [397/743] Generating lib/rte_rib_def with a custom command 00:02:45.466 [398/743] Generating lib/rte_rib_mingw with a custom command 00:02:45.466 [399/743] Generating lib/rte_reorder_def with a custom command 00:02:45.466 [400/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.466 [401/743] Generating lib/rte_reorder_mingw with a custom command 00:02:45.466 [402/743] Linking target lib/librte_pcapng.so.23.0 00:02:45.466 [403/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:45.724 [404/743] Linking static target lib/librte_power.a 00:02:45.724 [405/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:45.724 [406/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:45.724 [407/743] Linking static target lib/librte_dmadev.a 00:02:45.724 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.724 [409/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:45.724 [410/743] Linking target lib/librte_rawdev.so.23.0 00:02:45.724 [411/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:45.982 [412/743] Generating lib/rte_sched_def with a custom command 00:02:45.982 [413/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:45.982 [414/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:45.982 [415/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:45.982 [416/743] Linking static target lib/librte_regexdev.a 00:02:45.982 [417/743] Generating lib/rte_sched_mingw with a custom command 00:02:45.982 [418/743] Linking static target 
lib/librte_member.a 00:02:45.982 [419/743] Generating lib/rte_security_def with a custom command 00:02:45.982 [420/743] Generating lib/rte_security_mingw with a custom command 00:02:45.982 [421/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:46.240 [422/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:46.240 [423/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.240 [424/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:46.240 [425/743] Linking target lib/librte_dmadev.so.23.0 00:02:46.241 [426/743] Generating lib/rte_stack_def with a custom command 00:02:46.241 [427/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:46.241 [428/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:46.241 [429/743] Linking static target lib/librte_reorder.a 00:02:46.241 [430/743] Linking static target lib/librte_stack.a 00:02:46.241 [431/743] Generating lib/rte_stack_mingw with a custom command 00:02:46.241 [432/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.241 [433/743] Linking target lib/librte_member.so.23.0 00:02:46.241 [434/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:46.499 [435/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:46.499 [436/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.499 [437/743] Linking target lib/librte_stack.so.23.0 00:02:46.499 [438/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.499 [439/743] Linking target lib/librte_reorder.so.23.0 00:02:46.499 [440/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.499 [441/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:46.499 [442/743] Linking static target lib/librte_rib.a 00:02:46.499 [443/743] Linking target lib/librte_power.so.23.0 00:02:46.757 [444/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.757 [445/743] Linking target lib/librte_regexdev.so.23.0 00:02:46.757 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:46.757 [447/743] Linking static target lib/librte_security.a 00:02:47.015 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.015 [449/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:47.015 [450/743] Linking target lib/librte_rib.so.23.0 00:02:47.015 [451/743] Generating lib/rte_vhost_def with a custom command 00:02:47.280 [452/743] Generating lib/rte_vhost_mingw with a custom command 00:02:47.280 [453/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:47.280 [454/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.280 [455/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:47.280 [456/743] Linking target lib/librte_security.so.23.0 00:02:47.538 [457/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:47.538 [458/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:47.538 [459/743] Linking static target lib/librte_sched.a 00:02:47.538 [460/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:48.106 [461/743] Generating 
lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.106 [462/743] Linking target lib/librte_sched.so.23.0 00:02:48.106 [463/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:48.106 [464/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:48.106 [465/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:48.106 [466/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:48.106 [467/743] Generating lib/rte_ipsec_def with a custom command 00:02:48.364 [468/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:48.364 [469/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:48.364 [470/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:48.364 [471/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:48.929 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:48.929 [473/743] Generating lib/rte_fib_def with a custom command 00:02:48.929 [474/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:48.930 [475/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:48.930 [476/743] Generating lib/rte_fib_mingw with a custom command 00:02:48.930 [477/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:48.930 [478/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:48.930 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:48.930 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:49.188 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:49.188 [482/743] Linking static target lib/librte_ipsec.a 00:02:49.445 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.445 [484/743] Linking target lib/librte_ipsec.so.23.0 00:02:49.704 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:49.704 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:49.704 [487/743] Linking static target lib/librte_fib.a 00:02:49.704 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:49.962 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:49.962 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:49.962 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:49.962 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.962 [493/743] Linking target lib/librte_fib.so.23.0 00:02:50.219 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:50.785 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:50.785 [496/743] Generating lib/rte_port_def with a custom command 00:02:50.785 [497/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:50.785 [498/743] Generating lib/rte_port_mingw with a custom command 00:02:50.785 [499/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:50.785 [500/743] Generating lib/rte_pdump_def with a custom command 00:02:50.785 [501/743] Generating lib/rte_pdump_mingw with a custom command 00:02:50.785 [502/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:51.043 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:51.043 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:51.044 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:51.044 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:51.301 [507/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:51.301 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:51.301 [509/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:51.301 [510/743] Linking static target lib/librte_port.a 00:02:51.560 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:51.560 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:51.818 [513/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.818 [514/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:51.818 [515/743] Linking target lib/librte_port.so.23.0 00:02:51.818 [516/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:51.818 [517/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:52.077 [518/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:52.077 [519/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:52.077 [520/743] Linking static target lib/librte_pdump.a 00:02:52.335 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.335 [522/743] Linking target lib/librte_pdump.so.23.0 00:02:52.335 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:52.335 [524/743] Generating lib/rte_table_def with a custom command 00:02:52.594 [525/743] Generating lib/rte_table_mingw with a custom command 00:02:52.594 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:52.594 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:52.853 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:52.853 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:52.853 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:53.111 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:53.111 [532/743] Generating lib/rte_pipeline_def with a custom command 00:02:53.111 [533/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:53.111 [534/743] Linking static target lib/librte_table.a 00:02:53.111 [535/743] Generating lib/rte_pipeline_mingw with a custom command 00:02:53.370 [536/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:53.370 [537/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:53.628 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:53.887 [539/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.887 [540/743] Linking target lib/librte_table.so.23.0 00:02:53.887 [541/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:53.887 [542/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:54.146 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:54.146 [544/743] Generating lib/rte_graph_def with a custom command 00:02:54.146 [545/743] Compiling C object 
lib/librte_graph.a.p/graph_graph.c.o 00:02:54.146 [546/743] Generating lib/rte_graph_mingw with a custom command 00:02:54.146 [547/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:54.146 [548/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:54.717 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:54.717 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:54.717 [551/743] Linking static target lib/librte_graph.a 00:02:54.717 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:54.974 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:54.974 [554/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:54.974 [555/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:55.233 [556/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:55.233 [557/743] Generating lib/rte_node_def with a custom command 00:02:55.233 [558/743] Generating lib/rte_node_mingw with a custom command 00:02:55.233 [559/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:55.491 [560/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:55.491 [561/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.491 [562/743] Linking target lib/librte_graph.so.23.0 00:02:55.491 [563/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:55.491 [564/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:55.749 [565/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:55.749 [566/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:55.749 [567/743] Generating drivers/rte_bus_pci_def with a custom command 00:02:55.749 [568/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:55.749 [569/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:55.749 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:55.749 [571/743] Generating drivers/rte_bus_vdev_def with a custom command 00:02:55.749 [572/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:55.749 [573/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:55.749 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:02:56.007 [575/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:56.007 [576/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:56.007 [577/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:56.007 [578/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:56.007 [579/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:56.007 [580/743] Linking static target lib/librte_node.a 00:02:56.007 [581/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:56.265 [582/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:56.265 [583/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:56.265 [584/743] Linking static target drivers/librte_bus_vdev.a 00:02:56.265 [585/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.265 [586/743] Linking target lib/librte_node.so.23.0 00:02:56.265 [587/743] 
Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:56.265 [588/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:56.265 [589/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:56.524 [590/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.524 [591/743] Linking target drivers/librte_bus_vdev.so.23.0 00:02:56.524 [592/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:56.524 [593/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:56.524 [594/743] Linking static target drivers/librte_bus_pci.a 00:02:56.524 [595/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:56.524 [596/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:56.782 [597/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.041 [598/743] Linking target drivers/librte_bus_pci.so.23.0 00:02:57.041 [599/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:57.041 [600/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:57.041 [601/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:57.041 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:57.299 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:57.299 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:57.299 [605/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:57.299 [606/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:57.299 [607/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:57.299 [608/743] Linking static target drivers/librte_mempool_ring.a 00:02:57.299 [609/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:57.563 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:02:57.821 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:58.079 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:58.337 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:58.337 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:58.596 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:59.163 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:59.163 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:59.163 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:59.421 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:59.679 [620/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:59.679 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:59.679 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:02:59.679 [623/743] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:59.937 [624/743] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:59.937 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:00.872 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:01.130 [627/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:01.130 [628/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:01.130 [629/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:01.130 [630/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:01.390 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:01.390 [632/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:01.390 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:01.390 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:01.648 [635/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:01.907 [636/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:02.165 [637/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:02.165 [638/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:02.165 [639/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:02.165 [640/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:02.165 [641/743] Linking static target lib/librte_vhost.a 00:03:02.423 [642/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:02.423 [643/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:02.423 [644/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:02.423 [645/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:02.682 [646/743] Linking static target drivers/librte_net_i40e.a 00:03:02.682 [647/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:02.682 [648/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:02.940 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:02.940 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:03.199 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:03.199 [652/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.199 [653/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:03.459 [654/743] Linking target drivers/librte_net_i40e.so.23.0 00:03:03.459 [655/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.459 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:03.459 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:03.717 [658/743] Linking target lib/librte_vhost.so.23.0 00:03:03.718 [659/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:04.284 [660/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 
00:03:04.284 [661/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:04.284 [662/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:04.284 [663/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:04.284 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:04.543 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:04.543 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:04.543 [667/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:04.543 [668/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:04.802 [669/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:04.802 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:05.370 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:05.370 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:05.370 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:05.629 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:05.888 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:06.147 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:06.147 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:06.406 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:06.406 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:06.406 [680/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:06.406 [681/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:06.665 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:06.923 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:06.923 [684/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:06.923 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:07.182 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:07.182 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:07.182 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:07.442 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:07.442 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:07.442 [691/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:07.442 [692/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:07.442 [693/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:07.702 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:08.268 [695/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:08.268 [696/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:08.268 
[697/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:08.526 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:08.526 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:09.092 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:09.092 [701/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:09.092 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:09.362 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:09.362 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:09.633 [705/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:09.633 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:09.633 [707/743] Linking static target lib/librte_pipeline.a 00:03:09.633 [708/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:09.890 [709/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:10.147 [710/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:10.147 [711/743] Linking target app/dpdk-dumpcap 00:03:10.147 [712/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:10.147 [713/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:10.405 [714/743] Linking target app/dpdk-pdump 00:03:10.665 [715/743] Linking target app/dpdk-proc-info 00:03:10.665 [716/743] Linking target app/dpdk-test-acl 00:03:10.665 [717/743] Linking target app/dpdk-test-bbdev 00:03:10.665 [718/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:10.665 [719/743] Linking target app/dpdk-test-cmdline 00:03:10.923 [720/743] Linking target app/dpdk-test-compress-perf 00:03:10.923 [721/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:10.923 [722/743] Linking target app/dpdk-test-crypto-perf 00:03:11.180 [723/743] Linking target app/dpdk-test-fib 00:03:11.180 [724/743] Linking target app/dpdk-test-eventdev 00:03:11.180 [725/743] Linking target app/dpdk-test-gpudev 00:03:11.180 [726/743] Linking target app/dpdk-test-flow-perf 00:03:11.438 [727/743] Linking target app/dpdk-test-pipeline 00:03:11.438 [728/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:11.696 [729/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:11.696 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:11.954 [731/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:11.954 [732/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:11.954 [733/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.212 [734/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:12.212 [735/743] Linking target lib/librte_pipeline.so.23.0 00:03:12.212 [736/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:12.212 [737/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:12.470 [738/743] Linking target app/dpdk-test-sad 00:03:12.470 [739/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:12.728 [740/743] Linking target app/dpdk-test-regex 00:03:12.728 [741/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:12.986 [742/743] Linking target app/dpdk-testpmd 00:03:13.244 [743/743] Linking target 
app/dpdk-test-security-perf 00:03:13.244 10:08:26 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:13.244 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:13.502 [0/1] Installing files. 00:03:13.763 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:13.763 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:13.763 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:13.763 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:13.763 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:13.763 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:13.763 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:13.763 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:13.763 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:13.763 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:13.763 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:13.763 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:13.763 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:13.763 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:13.763 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:13.763 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:13.763 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:13.763 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:13.763 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:13.763 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:13.763 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:13.763 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.764 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:13.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.766 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.766 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:13.767 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.767 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:13.768 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.768 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:13.768 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:13.769 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:13.769 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:13.769 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.769 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.769 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.769 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.769 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.769 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.769 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.769 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.769 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.769 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.769 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.769 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.769 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.769 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.769 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_timer.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.042 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing 
lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:14.043 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:14.043 Installing drivers/librte_mempool_ring.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:14.043 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.043 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:14.043 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.043 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.043 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.043 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.043 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.043 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.043 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.043 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.043 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.043 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.043 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.043 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.043 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.043 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.043 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.043 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.043 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.043 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.043 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:14.044 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.334 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.334 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.334 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.334 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.334 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.334 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.334 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.334 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.334 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.334 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.334 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.334 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.334 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.334 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.334 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 
Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.335 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:14.336 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:14.336 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:14.336 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:14.336 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:14.336 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:14.336 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:14.336 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:14.336 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:14.336 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:14.336 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:14.336 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:14.336 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:14.336 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:14.336 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:14.336 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:14.336 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:14.336 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:14.336 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:14.336 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:14.336 Installing symlink pointing to librte_ethdev.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:14.336 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:14.336 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:14.336 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:14.336 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:14.336 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:14.336 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:14.336 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:14.336 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:14.336 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:14.336 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:14.336 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:14.336 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:14.336 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:14.336 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:14.336 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:14.336 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:14.336 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:14.336 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:14.336 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:14.336 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:14.336 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:14.336 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:14.336 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:14.336 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:14.336 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:14.336 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:14.336 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:14.336 Installing symlink pointing to librte_efd.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:14.336 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:14.336 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:14.336 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:14.336 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:14.336 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:14.336 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:14.336 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:14.336 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:14.336 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:14.336 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:14.336 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:14.336 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:14.336 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:14.336 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:14.337 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:14.337 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:14.337 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:14.337 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:14.337 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:14.337 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:14.337 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:14.337 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:14.337 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:14.337 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:14.337 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:14.337 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:14.337 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:14.337 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:14.337 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:14.337 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:14.337 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:14.337 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 
00:03:14.337 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:14.337 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:14.337 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:14.337 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:14.337 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:14.337 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:14.337 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:14.337 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:14.337 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:14.337 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:14.337 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:14.337 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:14.337 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:14.337 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:14.337 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:14.337 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:14.337 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:14.337 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:14.337 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:14.337 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:14.337 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:14.337 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:14.337 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:14.337 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:14.337 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:14.337 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:14.337 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:14.337 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:14.337 Installing symlink pointing to librte_pdump.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:14.337 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:14.337 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:14.337 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:14.337 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:14.337 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:14.337 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:14.337 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:14.337 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:14.337 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:14.337 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:14.337 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:14.337 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:14.337 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:14.337 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:14.337 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:14.337 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:14.337 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:14.337 10:08:27 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:14.337 10:08:27 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:14.337 10:08:27 -- common/autobuild_common.sh@203 -- $ cat 00:03:14.337 ************************************ 00:03:14.337 END TEST build_native_dpdk 00:03:14.337 ************************************ 00:03:14.337 10:08:27 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:14.337 00:03:14.337 real 0m51.579s 00:03:14.337 user 6m6.075s 00:03:14.337 sys 0m59.636s 00:03:14.337 10:08:27 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:14.337 10:08:27 -- common/autotest_common.sh@10 -- $ set +x 00:03:14.337 10:08:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:14.337 10:08:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:14.337 10:08:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:14.337 10:08:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:14.337 10:08:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:14.337 10:08:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:14.337 10:08:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:14.337 10:08:27 
-- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:14.598 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:14.598 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.598 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:14.598 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:14.857 Using 'verbs' RDMA provider 00:03:30.672 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:42.879 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:42.879 Creating mk/config.mk...done. 00:03:42.879 Creating mk/cc.flags.mk...done. 00:03:42.879 Type 'make' to build. 00:03:42.879 10:08:55 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:42.879 10:08:55 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:03:42.879 10:08:55 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:03:42.879 10:08:55 -- common/autotest_common.sh@10 -- $ set +x 00:03:42.879 ************************************ 00:03:42.879 START TEST make 00:03:42.879 ************************************ 00:03:42.879 10:08:55 -- common/autotest_common.sh@1104 -- $ make -j10 00:03:42.879 make[1]: Nothing to be done for 'all'. 00:04:04.807 CC lib/log/log.o 00:04:04.807 CC lib/log/log_flags.o 00:04:04.807 CC lib/ut/ut.o 00:04:04.807 CC lib/ut_mock/mock.o 00:04:04.807 CC lib/log/log_deprecated.o 00:04:04.807 LIB libspdk_ut_mock.a 00:04:04.807 SO libspdk_ut_mock.so.5.0 00:04:04.807 LIB libspdk_ut.a 00:04:04.807 SO libspdk_ut.so.1.0 00:04:04.807 LIB libspdk_log.a 00:04:04.807 SYMLINK libspdk_ut_mock.so 00:04:04.807 SO libspdk_log.so.6.1 00:04:04.807 SYMLINK libspdk_ut.so 00:04:04.807 SYMLINK libspdk_log.so 00:04:04.807 CC lib/dma/dma.o 00:04:04.807 CC lib/util/base64.o 00:04:04.807 CXX lib/trace_parser/trace.o 00:04:04.807 CC lib/util/bit_array.o 00:04:04.807 CC lib/util/cpuset.o 00:04:04.807 CC lib/util/crc16.o 00:04:04.807 CC lib/util/crc32.o 00:04:04.807 CC lib/util/crc32c.o 00:04:04.807 CC lib/ioat/ioat.o 00:04:04.807 CC lib/vfio_user/host/vfio_user_pci.o 00:04:05.066 CC lib/util/crc32_ieee.o 00:04:05.066 CC lib/vfio_user/host/vfio_user.o 00:04:05.066 CC lib/util/crc64.o 00:04:05.066 LIB libspdk_dma.a 00:04:05.066 CC lib/util/dif.o 00:04:05.066 CC lib/util/fd.o 00:04:05.066 SO libspdk_dma.so.3.0 00:04:05.066 CC lib/util/file.o 00:04:05.066 CC lib/util/hexlify.o 00:04:05.066 SYMLINK libspdk_dma.so 00:04:05.066 CC lib/util/iov.o 00:04:05.066 LIB libspdk_ioat.a 00:04:05.066 CC lib/util/math.o 00:04:05.066 SO libspdk_ioat.so.6.0 00:04:05.324 LIB libspdk_vfio_user.a 00:04:05.324 CC lib/util/pipe.o 00:04:05.324 SYMLINK libspdk_ioat.so 00:04:05.324 CC lib/util/strerror_tls.o 00:04:05.324 CC lib/util/string.o 00:04:05.324 CC lib/util/uuid.o 00:04:05.324 SO libspdk_vfio_user.so.4.0 00:04:05.324 CC lib/util/fd_group.o 00:04:05.324 SYMLINK libspdk_vfio_user.so 00:04:05.324 CC lib/util/xor.o 00:04:05.324 CC lib/util/zipf.o 00:04:05.583 LIB libspdk_util.a 00:04:05.583 SO libspdk_util.so.8.0 00:04:05.842 LIB libspdk_trace_parser.a 00:04:05.842 SO libspdk_trace_parser.so.4.0 00:04:05.842 SYMLINK libspdk_util.so 00:04:05.842 SYMLINK libspdk_trace_parser.so 00:04:06.101 CC lib/conf/conf.o 
00:04:06.101 CC lib/json/json_parse.o 00:04:06.101 CC lib/json/json_util.o 00:04:06.101 CC lib/json/json_write.o 00:04:06.101 CC lib/rdma/common.o 00:04:06.101 CC lib/env_dpdk/env.o 00:04:06.101 CC lib/env_dpdk/memory.o 00:04:06.101 CC lib/env_dpdk/pci.o 00:04:06.101 CC lib/vmd/vmd.o 00:04:06.101 CC lib/idxd/idxd.o 00:04:06.101 CC lib/rdma/rdma_verbs.o 00:04:06.101 CC lib/idxd/idxd_user.o 00:04:06.360 LIB libspdk_conf.a 00:04:06.360 CC lib/idxd/idxd_kernel.o 00:04:06.360 LIB libspdk_json.a 00:04:06.360 SO libspdk_conf.so.5.0 00:04:06.360 SO libspdk_json.so.5.1 00:04:06.360 CC lib/env_dpdk/init.o 00:04:06.360 CC lib/env_dpdk/threads.o 00:04:06.360 SYMLINK libspdk_conf.so 00:04:06.360 CC lib/env_dpdk/pci_ioat.o 00:04:06.360 LIB libspdk_rdma.a 00:04:06.360 SYMLINK libspdk_json.so 00:04:06.360 CC lib/env_dpdk/pci_virtio.o 00:04:06.360 CC lib/env_dpdk/pci_vmd.o 00:04:06.360 SO libspdk_rdma.so.5.0 00:04:06.619 CC lib/env_dpdk/pci_idxd.o 00:04:06.619 SYMLINK libspdk_rdma.so 00:04:06.619 LIB libspdk_idxd.a 00:04:06.619 CC lib/vmd/led.o 00:04:06.619 CC lib/env_dpdk/pci_event.o 00:04:06.619 SO libspdk_idxd.so.11.0 00:04:06.619 CC lib/env_dpdk/sigbus_handler.o 00:04:06.619 CC lib/env_dpdk/pci_dpdk.o 00:04:06.619 CC lib/jsonrpc/jsonrpc_server.o 00:04:06.619 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:06.619 SYMLINK libspdk_idxd.so 00:04:06.619 CC lib/jsonrpc/jsonrpc_client.o 00:04:06.619 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:06.619 LIB libspdk_vmd.a 00:04:06.619 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:06.619 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:06.619 SO libspdk_vmd.so.5.0 00:04:06.886 SYMLINK libspdk_vmd.so 00:04:06.886 LIB libspdk_jsonrpc.a 00:04:06.886 SO libspdk_jsonrpc.so.5.1 00:04:07.145 SYMLINK libspdk_jsonrpc.so 00:04:07.145 CC lib/rpc/rpc.o 00:04:07.404 LIB libspdk_env_dpdk.a 00:04:07.404 LIB libspdk_rpc.a 00:04:07.404 SO libspdk_env_dpdk.so.13.0 00:04:07.404 SO libspdk_rpc.so.5.0 00:04:07.663 SYMLINK libspdk_rpc.so 00:04:07.663 SYMLINK libspdk_env_dpdk.so 00:04:07.663 CC lib/notify/notify.o 00:04:07.663 CC lib/notify/notify_rpc.o 00:04:07.663 CC lib/trace/trace_flags.o 00:04:07.663 CC lib/trace/trace.o 00:04:07.663 CC lib/trace/trace_rpc.o 00:04:07.663 CC lib/sock/sock.o 00:04:07.663 CC lib/sock/sock_rpc.o 00:04:07.922 LIB libspdk_notify.a 00:04:07.922 LIB libspdk_trace.a 00:04:07.922 SO libspdk_notify.so.5.0 00:04:07.922 SO libspdk_trace.so.9.0 00:04:07.922 SYMLINK libspdk_notify.so 00:04:08.181 SYMLINK libspdk_trace.so 00:04:08.181 LIB libspdk_sock.a 00:04:08.181 SO libspdk_sock.so.8.0 00:04:08.181 SYMLINK libspdk_sock.so 00:04:08.181 CC lib/thread/thread.o 00:04:08.181 CC lib/thread/iobuf.o 00:04:08.440 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:08.440 CC lib/nvme/nvme_ctrlr.o 00:04:08.440 CC lib/nvme/nvme_fabric.o 00:04:08.440 CC lib/nvme/nvme_ns_cmd.o 00:04:08.440 CC lib/nvme/nvme_ns.o 00:04:08.440 CC lib/nvme/nvme_pcie_common.o 00:04:08.440 CC lib/nvme/nvme_pcie.o 00:04:08.440 CC lib/nvme/nvme_qpair.o 00:04:08.698 CC lib/nvme/nvme.o 00:04:09.266 CC lib/nvme/nvme_quirks.o 00:04:09.266 CC lib/nvme/nvme_transport.o 00:04:09.266 CC lib/nvme/nvme_discovery.o 00:04:09.266 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:09.266 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:09.266 CC lib/nvme/nvme_tcp.o 00:04:09.525 CC lib/nvme/nvme_opal.o 00:04:09.525 CC lib/nvme/nvme_io_msg.o 00:04:09.783 CC lib/nvme/nvme_poll_group.o 00:04:09.783 LIB libspdk_thread.a 00:04:09.783 CC lib/nvme/nvme_zns.o 00:04:09.783 SO libspdk_thread.so.9.0 00:04:09.783 CC lib/nvme/nvme_cuse.o 00:04:09.783 CC lib/nvme/nvme_vfio_user.o 00:04:09.783 
SYMLINK libspdk_thread.so 00:04:10.041 CC lib/nvme/nvme_rdma.o 00:04:10.041 CC lib/accel/accel.o 00:04:10.041 CC lib/blob/blobstore.o 00:04:10.299 CC lib/init/json_config.o 00:04:10.299 CC lib/init/subsystem.o 00:04:10.556 CC lib/init/subsystem_rpc.o 00:04:10.556 CC lib/init/rpc.o 00:04:10.556 CC lib/accel/accel_rpc.o 00:04:10.556 CC lib/accel/accel_sw.o 00:04:10.556 CC lib/blob/request.o 00:04:10.556 LIB libspdk_init.a 00:04:10.814 SO libspdk_init.so.4.0 00:04:10.814 CC lib/blob/zeroes.o 00:04:10.814 CC lib/blob/blob_bs_dev.o 00:04:10.814 CC lib/virtio/virtio.o 00:04:10.814 SYMLINK libspdk_init.so 00:04:10.814 CC lib/virtio/virtio_vhost_user.o 00:04:10.814 CC lib/virtio/virtio_vfio_user.o 00:04:10.814 CC lib/event/app.o 00:04:10.814 CC lib/event/reactor.o 00:04:10.814 CC lib/event/log_rpc.o 00:04:11.072 CC lib/event/app_rpc.o 00:04:11.072 LIB libspdk_accel.a 00:04:11.072 CC lib/event/scheduler_static.o 00:04:11.072 CC lib/virtio/virtio_pci.o 00:04:11.072 SO libspdk_accel.so.14.0 00:04:11.072 SYMLINK libspdk_accel.so 00:04:11.330 LIB libspdk_event.a 00:04:11.330 LIB libspdk_nvme.a 00:04:11.330 CC lib/bdev/bdev_rpc.o 00:04:11.330 CC lib/bdev/bdev.o 00:04:11.330 CC lib/bdev/bdev_zone.o 00:04:11.330 CC lib/bdev/part.o 00:04:11.330 CC lib/bdev/scsi_nvme.o 00:04:11.330 SO libspdk_event.so.12.0 00:04:11.330 LIB libspdk_virtio.a 00:04:11.330 SO libspdk_virtio.so.6.0 00:04:11.330 SYMLINK libspdk_event.so 00:04:11.588 SO libspdk_nvme.so.12.0 00:04:11.588 SYMLINK libspdk_virtio.so 00:04:11.863 SYMLINK libspdk_nvme.so 00:04:12.807 LIB libspdk_blob.a 00:04:13.066 SO libspdk_blob.so.10.1 00:04:13.066 SYMLINK libspdk_blob.so 00:04:13.324 CC lib/lvol/lvol.o 00:04:13.324 CC lib/blobfs/blobfs.o 00:04:13.324 CC lib/blobfs/tree.o 00:04:13.892 LIB libspdk_bdev.a 00:04:13.892 SO libspdk_bdev.so.14.0 00:04:14.150 LIB libspdk_blobfs.a 00:04:14.150 SO libspdk_blobfs.so.9.0 00:04:14.150 SYMLINK libspdk_bdev.so 00:04:14.150 LIB libspdk_lvol.a 00:04:14.150 SYMLINK libspdk_blobfs.so 00:04:14.150 SO libspdk_lvol.so.9.1 00:04:14.150 SYMLINK libspdk_lvol.so 00:04:14.150 CC lib/scsi/dev.o 00:04:14.150 CC lib/scsi/lun.o 00:04:14.150 CC lib/scsi/port.o 00:04:14.150 CC lib/scsi/scsi.o 00:04:14.150 CC lib/scsi/scsi_bdev.o 00:04:14.150 CC lib/scsi/scsi_pr.o 00:04:14.150 CC lib/ublk/ublk.o 00:04:14.150 CC lib/nvmf/ctrlr.o 00:04:14.150 CC lib/ftl/ftl_core.o 00:04:14.150 CC lib/nbd/nbd.o 00:04:14.409 CC lib/scsi/scsi_rpc.o 00:04:14.409 CC lib/ftl/ftl_init.o 00:04:14.409 CC lib/ftl/ftl_layout.o 00:04:14.668 CC lib/ftl/ftl_debug.o 00:04:14.668 CC lib/ftl/ftl_io.o 00:04:14.668 CC lib/ftl/ftl_sb.o 00:04:14.668 CC lib/ftl/ftl_l2p.o 00:04:14.668 CC lib/nbd/nbd_rpc.o 00:04:14.668 CC lib/ublk/ublk_rpc.o 00:04:14.926 CC lib/ftl/ftl_l2p_flat.o 00:04:14.926 CC lib/ftl/ftl_nv_cache.o 00:04:14.926 CC lib/ftl/ftl_band.o 00:04:14.926 LIB libspdk_nbd.a 00:04:14.926 CC lib/scsi/task.o 00:04:14.926 CC lib/ftl/ftl_band_ops.o 00:04:14.926 SO libspdk_nbd.so.6.0 00:04:14.926 CC lib/ftl/ftl_writer.o 00:04:14.926 LIB libspdk_ublk.a 00:04:14.926 SO libspdk_ublk.so.2.0 00:04:14.926 SYMLINK libspdk_nbd.so 00:04:14.926 CC lib/ftl/ftl_rq.o 00:04:14.926 CC lib/nvmf/ctrlr_discovery.o 00:04:14.926 SYMLINK libspdk_ublk.so 00:04:14.926 CC lib/ftl/ftl_reloc.o 00:04:14.926 CC lib/ftl/ftl_l2p_cache.o 00:04:14.926 LIB libspdk_scsi.a 00:04:15.184 SO libspdk_scsi.so.8.0 00:04:15.184 CC lib/nvmf/ctrlr_bdev.o 00:04:15.184 CC lib/ftl/ftl_p2l.o 00:04:15.184 CC lib/ftl/mngt/ftl_mngt.o 00:04:15.184 CC lib/nvmf/subsystem.o 00:04:15.184 SYMLINK libspdk_scsi.so 00:04:15.184 CC 
lib/nvmf/nvmf.o 00:04:15.442 CC lib/nvmf/nvmf_rpc.o 00:04:15.442 CC lib/nvmf/transport.o 00:04:15.442 CC lib/nvmf/tcp.o 00:04:15.442 CC lib/nvmf/rdma.o 00:04:15.700 CC lib/iscsi/conn.o 00:04:15.700 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:15.700 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:15.958 CC lib/iscsi/init_grp.o 00:04:15.958 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:15.958 CC lib/vhost/vhost.o 00:04:15.958 CC lib/vhost/vhost_rpc.o 00:04:15.958 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:16.216 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:16.216 CC lib/iscsi/iscsi.o 00:04:16.216 CC lib/vhost/vhost_scsi.o 00:04:16.216 CC lib/vhost/vhost_blk.o 00:04:16.475 CC lib/vhost/rte_vhost_user.o 00:04:16.475 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:16.475 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:16.475 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:16.475 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:16.734 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:16.734 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:16.734 CC lib/iscsi/md5.o 00:04:16.993 CC lib/iscsi/param.o 00:04:16.993 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:16.993 CC lib/iscsi/portal_grp.o 00:04:16.993 CC lib/ftl/utils/ftl_conf.o 00:04:16.993 CC lib/ftl/utils/ftl_md.o 00:04:16.993 CC lib/iscsi/tgt_node.o 00:04:16.993 CC lib/iscsi/iscsi_subsystem.o 00:04:17.251 CC lib/iscsi/iscsi_rpc.o 00:04:17.251 CC lib/iscsi/task.o 00:04:17.251 CC lib/ftl/utils/ftl_mempool.o 00:04:17.251 CC lib/ftl/utils/ftl_bitmap.o 00:04:17.251 CC lib/ftl/utils/ftl_property.o 00:04:17.510 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:17.510 LIB libspdk_nvmf.a 00:04:17.510 LIB libspdk_vhost.a 00:04:17.510 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:17.510 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:17.510 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:17.510 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:17.510 LIB libspdk_iscsi.a 00:04:17.510 SO libspdk_vhost.so.7.1 00:04:17.510 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:17.510 SO libspdk_nvmf.so.17.0 00:04:17.769 SO libspdk_iscsi.so.7.0 00:04:17.769 SYMLINK libspdk_vhost.so 00:04:17.769 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:17.769 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:17.769 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:17.769 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:17.769 CC lib/ftl/base/ftl_base_dev.o 00:04:17.769 CC lib/ftl/base/ftl_base_bdev.o 00:04:17.769 CC lib/ftl/ftl_trace.o 00:04:17.769 SYMLINK libspdk_nvmf.so 00:04:17.769 SYMLINK libspdk_iscsi.so 00:04:18.028 LIB libspdk_ftl.a 00:04:18.287 SO libspdk_ftl.so.8.0 00:04:18.544 SYMLINK libspdk_ftl.so 00:04:18.802 CC module/env_dpdk/env_dpdk_rpc.o 00:04:19.060 CC module/accel/dsa/accel_dsa.o 00:04:19.060 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:19.060 CC module/blob/bdev/blob_bdev.o 00:04:19.060 CC module/accel/error/accel_error.o 00:04:19.060 CC module/scheduler/gscheduler/gscheduler.o 00:04:19.060 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:19.060 CC module/sock/posix/posix.o 00:04:19.060 CC module/accel/iaa/accel_iaa.o 00:04:19.060 CC module/accel/ioat/accel_ioat.o 00:04:19.060 LIB libspdk_env_dpdk_rpc.a 00:04:19.060 SO libspdk_env_dpdk_rpc.so.5.0 00:04:19.060 LIB libspdk_scheduler_dpdk_governor.a 00:04:19.060 SYMLINK libspdk_env_dpdk_rpc.so 00:04:19.060 LIB libspdk_scheduler_gscheduler.a 00:04:19.060 CC module/accel/iaa/accel_iaa_rpc.o 00:04:19.060 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:19.061 CC module/accel/error/accel_error_rpc.o 00:04:19.061 LIB libspdk_scheduler_dynamic.a 00:04:19.061 SO libspdk_scheduler_gscheduler.so.3.0 00:04:19.061 SO libspdk_scheduler_dynamic.so.3.0 00:04:19.319 
SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:19.319 CC module/accel/ioat/accel_ioat_rpc.o 00:04:19.319 LIB libspdk_blob_bdev.a 00:04:19.319 CC module/accel/dsa/accel_dsa_rpc.o 00:04:19.319 SYMLINK libspdk_scheduler_gscheduler.so 00:04:19.319 SYMLINK libspdk_scheduler_dynamic.so 00:04:19.319 SO libspdk_blob_bdev.so.10.1 00:04:19.319 CC module/sock/uring/uring.o 00:04:19.319 LIB libspdk_accel_iaa.a 00:04:19.319 LIB libspdk_accel_error.a 00:04:19.319 SYMLINK libspdk_blob_bdev.so 00:04:19.319 SO libspdk_accel_iaa.so.2.0 00:04:19.319 SO libspdk_accel_error.so.1.0 00:04:19.319 LIB libspdk_accel_ioat.a 00:04:19.319 LIB libspdk_accel_dsa.a 00:04:19.319 SYMLINK libspdk_accel_iaa.so 00:04:19.319 SO libspdk_accel_ioat.so.5.0 00:04:19.319 SYMLINK libspdk_accel_error.so 00:04:19.319 SO libspdk_accel_dsa.so.4.0 00:04:19.577 SYMLINK libspdk_accel_ioat.so 00:04:19.577 CC module/bdev/error/vbdev_error.o 00:04:19.577 CC module/bdev/gpt/gpt.o 00:04:19.577 CC module/bdev/lvol/vbdev_lvol.o 00:04:19.577 CC module/blobfs/bdev/blobfs_bdev.o 00:04:19.577 CC module/bdev/delay/vbdev_delay.o 00:04:19.577 SYMLINK libspdk_accel_dsa.so 00:04:19.577 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:19.577 CC module/bdev/malloc/bdev_malloc.o 00:04:19.577 CC module/bdev/null/bdev_null.o 00:04:19.577 LIB libspdk_sock_posix.a 00:04:19.577 SO libspdk_sock_posix.so.5.0 00:04:19.577 CC module/bdev/gpt/vbdev_gpt.o 00:04:19.836 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:19.836 LIB libspdk_blobfs_bdev.a 00:04:19.836 SYMLINK libspdk_sock_posix.so 00:04:19.836 CC module/bdev/error/vbdev_error_rpc.o 00:04:19.836 SO libspdk_blobfs_bdev.so.5.0 00:04:19.836 CC module/bdev/null/bdev_null_rpc.o 00:04:19.836 SYMLINK libspdk_blobfs_bdev.so 00:04:19.836 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:19.836 LIB libspdk_bdev_delay.a 00:04:19.836 LIB libspdk_bdev_error.a 00:04:19.836 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:19.836 SO libspdk_bdev_delay.so.5.0 00:04:20.094 LIB libspdk_sock_uring.a 00:04:20.094 LIB libspdk_bdev_gpt.a 00:04:20.094 LIB libspdk_bdev_null.a 00:04:20.094 SO libspdk_bdev_error.so.5.0 00:04:20.094 SO libspdk_sock_uring.so.4.0 00:04:20.094 SO libspdk_bdev_null.so.5.0 00:04:20.094 CC module/bdev/nvme/bdev_nvme.o 00:04:20.094 SO libspdk_bdev_gpt.so.5.0 00:04:20.094 SYMLINK libspdk_bdev_error.so 00:04:20.094 CC module/bdev/passthru/vbdev_passthru.o 00:04:20.094 SYMLINK libspdk_bdev_delay.so 00:04:20.094 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:20.094 SYMLINK libspdk_bdev_null.so 00:04:20.094 SYMLINK libspdk_bdev_gpt.so 00:04:20.094 SYMLINK libspdk_sock_uring.so 00:04:20.094 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:20.094 LIB libspdk_bdev_malloc.a 00:04:20.094 LIB libspdk_bdev_lvol.a 00:04:20.094 CC module/bdev/raid/bdev_raid.o 00:04:20.094 CC module/bdev/split/vbdev_split.o 00:04:20.094 SO libspdk_bdev_malloc.so.5.0 00:04:20.094 SO libspdk_bdev_lvol.so.5.0 00:04:20.094 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:20.094 CC module/bdev/uring/bdev_uring.o 00:04:20.352 CC module/bdev/split/vbdev_split_rpc.o 00:04:20.352 SYMLINK libspdk_bdev_malloc.so 00:04:20.352 CC module/bdev/raid/bdev_raid_rpc.o 00:04:20.352 SYMLINK libspdk_bdev_lvol.so 00:04:20.352 CC module/bdev/raid/bdev_raid_sb.o 00:04:20.352 LIB libspdk_bdev_passthru.a 00:04:20.352 SO libspdk_bdev_passthru.so.5.0 00:04:20.352 LIB libspdk_bdev_split.a 00:04:20.352 CC module/bdev/raid/raid0.o 00:04:20.352 SO libspdk_bdev_split.so.5.0 00:04:20.352 SYMLINK libspdk_bdev_passthru.so 00:04:20.610 SYMLINK libspdk_bdev_split.so 00:04:20.610 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:20.610 CC module/bdev/uring/bdev_uring_rpc.o 00:04:20.610 CC module/bdev/raid/raid1.o 00:04:20.610 CC module/bdev/nvme/nvme_rpc.o 00:04:20.610 CC module/bdev/nvme/bdev_mdns_client.o 00:04:20.610 CC module/bdev/nvme/vbdev_opal.o 00:04:20.610 CC module/bdev/aio/bdev_aio.o 00:04:20.610 LIB libspdk_bdev_zone_block.a 00:04:20.610 LIB libspdk_bdev_uring.a 00:04:20.610 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:20.869 SO libspdk_bdev_zone_block.so.5.0 00:04:20.869 SO libspdk_bdev_uring.so.5.0 00:04:20.869 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:20.869 CC module/bdev/raid/concat.o 00:04:20.869 SYMLINK libspdk_bdev_zone_block.so 00:04:20.869 SYMLINK libspdk_bdev_uring.so 00:04:20.869 CC module/bdev/aio/bdev_aio_rpc.o 00:04:20.869 CC module/bdev/ftl/bdev_ftl.o 00:04:20.869 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:20.869 CC module/bdev/iscsi/bdev_iscsi.o 00:04:21.128 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:21.128 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:21.128 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:21.128 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:21.128 LIB libspdk_bdev_aio.a 00:04:21.128 LIB libspdk_bdev_raid.a 00:04:21.128 SO libspdk_bdev_aio.so.5.0 00:04:21.128 SO libspdk_bdev_raid.so.5.0 00:04:21.128 SYMLINK libspdk_bdev_aio.so 00:04:21.128 SYMLINK libspdk_bdev_raid.so 00:04:21.386 LIB libspdk_bdev_ftl.a 00:04:21.386 SO libspdk_bdev_ftl.so.5.0 00:04:21.386 LIB libspdk_bdev_iscsi.a 00:04:21.386 SO libspdk_bdev_iscsi.so.5.0 00:04:21.386 SYMLINK libspdk_bdev_ftl.so 00:04:21.386 SYMLINK libspdk_bdev_iscsi.so 00:04:21.645 LIB libspdk_bdev_virtio.a 00:04:21.645 SO libspdk_bdev_virtio.so.5.0 00:04:21.645 SYMLINK libspdk_bdev_virtio.so 00:04:22.213 LIB libspdk_bdev_nvme.a 00:04:22.213 SO libspdk_bdev_nvme.so.6.0 00:04:22.471 SYMLINK libspdk_bdev_nvme.so 00:04:22.757 CC module/event/subsystems/vmd/vmd.o 00:04:22.757 CC module/event/subsystems/scheduler/scheduler.o 00:04:22.757 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:22.757 CC module/event/subsystems/sock/sock.o 00:04:22.757 CC module/event/subsystems/iobuf/iobuf.o 00:04:22.757 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:22.757 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:23.016 LIB libspdk_event_scheduler.a 00:04:23.016 LIB libspdk_event_sock.a 00:04:23.016 LIB libspdk_event_vhost_blk.a 00:04:23.016 LIB libspdk_event_vmd.a 00:04:23.016 LIB libspdk_event_iobuf.a 00:04:23.016 SO libspdk_event_scheduler.so.3.0 00:04:23.016 SO libspdk_event_sock.so.4.0 00:04:23.016 SO libspdk_event_vhost_blk.so.2.0 00:04:23.016 SO libspdk_event_vmd.so.5.0 00:04:23.016 SO libspdk_event_iobuf.so.2.0 00:04:23.016 SYMLINK libspdk_event_scheduler.so 00:04:23.016 SYMLINK libspdk_event_sock.so 00:04:23.016 SYMLINK libspdk_event_vhost_blk.so 00:04:23.016 SYMLINK libspdk_event_vmd.so 00:04:23.016 SYMLINK libspdk_event_iobuf.so 00:04:23.275 CC module/event/subsystems/accel/accel.o 00:04:23.534 LIB libspdk_event_accel.a 00:04:23.534 SO libspdk_event_accel.so.5.0 00:04:23.534 SYMLINK libspdk_event_accel.so 00:04:23.792 CC module/event/subsystems/bdev/bdev.o 00:04:24.052 LIB libspdk_event_bdev.a 00:04:24.052 SO libspdk_event_bdev.so.5.0 00:04:24.052 SYMLINK libspdk_event_bdev.so 00:04:24.052 CC module/event/subsystems/nbd/nbd.o 00:04:24.052 CC module/event/subsystems/scsi/scsi.o 00:04:24.310 CC module/event/subsystems/ublk/ublk.o 00:04:24.310 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:24.310 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:24.310 LIB libspdk_event_nbd.a 
00:04:24.310 LIB libspdk_event_scsi.a 00:04:24.310 SO libspdk_event_nbd.so.5.0 00:04:24.310 LIB libspdk_event_ublk.a 00:04:24.311 SO libspdk_event_scsi.so.5.0 00:04:24.311 SO libspdk_event_ublk.so.2.0 00:04:24.570 SYMLINK libspdk_event_nbd.so 00:04:24.570 LIB libspdk_event_nvmf.a 00:04:24.570 SYMLINK libspdk_event_scsi.so 00:04:24.570 SYMLINK libspdk_event_ublk.so 00:04:24.570 SO libspdk_event_nvmf.so.5.0 00:04:24.570 SYMLINK libspdk_event_nvmf.so 00:04:24.570 CC module/event/subsystems/iscsi/iscsi.o 00:04:24.570 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:24.829 LIB libspdk_event_vhost_scsi.a 00:04:24.829 LIB libspdk_event_iscsi.a 00:04:24.829 SO libspdk_event_vhost_scsi.so.2.0 00:04:24.829 SO libspdk_event_iscsi.so.5.0 00:04:24.829 SYMLINK libspdk_event_vhost_scsi.so 00:04:25.088 SYMLINK libspdk_event_iscsi.so 00:04:25.088 SO libspdk.so.5.0 00:04:25.088 SYMLINK libspdk.so 00:04:25.347 CXX app/trace/trace.o 00:04:25.347 CC examples/ioat/perf/perf.o 00:04:25.347 CC examples/nvme/hello_world/hello_world.o 00:04:25.347 CC examples/sock/hello_world/hello_sock.o 00:04:25.347 CC examples/accel/perf/accel_perf.o 00:04:25.347 CC examples/vmd/lsvmd/lsvmd.o 00:04:25.347 CC examples/blob/hello_world/hello_blob.o 00:04:25.347 CC test/app/bdev_svc/bdev_svc.o 00:04:25.347 CC test/accel/dif/dif.o 00:04:25.347 CC examples/bdev/hello_world/hello_bdev.o 00:04:25.605 LINK lsvmd 00:04:25.605 LINK bdev_svc 00:04:25.605 LINK hello_world 00:04:25.605 LINK ioat_perf 00:04:25.605 LINK hello_sock 00:04:25.605 LINK hello_blob 00:04:25.605 LINK hello_bdev 00:04:25.865 LINK spdk_trace 00:04:25.865 CC examples/vmd/led/led.o 00:04:25.865 LINK dif 00:04:25.865 CC examples/ioat/verify/verify.o 00:04:25.865 CC examples/nvme/reconnect/reconnect.o 00:04:25.865 LINK accel_perf 00:04:25.865 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:25.865 LINK led 00:04:25.865 CC examples/bdev/bdevperf/bdevperf.o 00:04:25.865 CC examples/nvmf/nvmf/nvmf.o 00:04:25.865 CC examples/blob/cli/blobcli.o 00:04:25.865 CC app/trace_record/trace_record.o 00:04:26.125 LINK verify 00:04:26.125 CC test/app/histogram_perf/histogram_perf.o 00:04:26.125 CC examples/util/zipf/zipf.o 00:04:26.125 LINK histogram_perf 00:04:26.125 LINK reconnect 00:04:26.125 LINK nvmf 00:04:26.125 CC examples/thread/thread/thread_ex.o 00:04:26.383 LINK spdk_trace_record 00:04:26.383 LINK nvme_fuzz 00:04:26.383 CC examples/idxd/perf/perf.o 00:04:26.383 LINK zipf 00:04:26.383 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:26.383 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:26.383 LINK blobcli 00:04:26.383 CC app/nvmf_tgt/nvmf_main.o 00:04:26.383 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:26.383 CC test/app/jsoncat/jsoncat.o 00:04:26.642 LINK thread 00:04:26.642 LINK interrupt_tgt 00:04:26.642 CC test/bdev/bdevio/bdevio.o 00:04:26.642 LINK idxd_perf 00:04:26.642 LINK jsoncat 00:04:26.642 LINK nvmf_tgt 00:04:26.642 LINK bdevperf 00:04:26.642 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:26.901 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:26.901 CC examples/nvme/arbitration/arbitration.o 00:04:26.901 CC test/app/stub/stub.o 00:04:26.901 CC app/iscsi_tgt/iscsi_tgt.o 00:04:26.901 LINK nvme_manage 00:04:26.901 CC app/spdk_lspci/spdk_lspci.o 00:04:26.901 CC app/spdk_tgt/spdk_tgt.o 00:04:26.901 CC examples/nvme/hotplug/hotplug.o 00:04:26.901 LINK bdevio 00:04:27.159 LINK stub 00:04:27.159 LINK spdk_lspci 00:04:27.159 LINK arbitration 00:04:27.159 LINK iscsi_tgt 00:04:27.159 LINK vhost_fuzz 00:04:27.159 LINK spdk_tgt 00:04:27.159 LINK hotplug 00:04:27.159 CC 
test/blobfs/mkfs/mkfs.o 00:04:27.159 TEST_HEADER include/spdk/accel.h 00:04:27.159 TEST_HEADER include/spdk/accel_module.h 00:04:27.159 TEST_HEADER include/spdk/assert.h 00:04:27.159 TEST_HEADER include/spdk/barrier.h 00:04:27.159 TEST_HEADER include/spdk/base64.h 00:04:27.159 TEST_HEADER include/spdk/bdev.h 00:04:27.420 TEST_HEADER include/spdk/bdev_module.h 00:04:27.420 TEST_HEADER include/spdk/bdev_zone.h 00:04:27.420 TEST_HEADER include/spdk/bit_array.h 00:04:27.420 TEST_HEADER include/spdk/bit_pool.h 00:04:27.420 TEST_HEADER include/spdk/blob_bdev.h 00:04:27.420 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:27.420 TEST_HEADER include/spdk/blobfs.h 00:04:27.420 TEST_HEADER include/spdk/blob.h 00:04:27.420 TEST_HEADER include/spdk/conf.h 00:04:27.420 TEST_HEADER include/spdk/config.h 00:04:27.420 TEST_HEADER include/spdk/cpuset.h 00:04:27.420 TEST_HEADER include/spdk/crc16.h 00:04:27.420 TEST_HEADER include/spdk/crc32.h 00:04:27.420 TEST_HEADER include/spdk/crc64.h 00:04:27.420 TEST_HEADER include/spdk/dif.h 00:04:27.420 TEST_HEADER include/spdk/dma.h 00:04:27.420 TEST_HEADER include/spdk/endian.h 00:04:27.420 TEST_HEADER include/spdk/env_dpdk.h 00:04:27.420 TEST_HEADER include/spdk/env.h 00:04:27.420 TEST_HEADER include/spdk/event.h 00:04:27.420 TEST_HEADER include/spdk/fd_group.h 00:04:27.420 TEST_HEADER include/spdk/fd.h 00:04:27.420 TEST_HEADER include/spdk/file.h 00:04:27.420 CC app/spdk_nvme_perf/perf.o 00:04:27.420 TEST_HEADER include/spdk/ftl.h 00:04:27.420 TEST_HEADER include/spdk/hexlify.h 00:04:27.420 TEST_HEADER include/spdk/gpt_spec.h 00:04:27.420 CC app/spdk_nvme_identify/identify.o 00:04:27.420 TEST_HEADER include/spdk/histogram_data.h 00:04:27.420 TEST_HEADER include/spdk/idxd.h 00:04:27.420 TEST_HEADER include/spdk/idxd_spec.h 00:04:27.420 TEST_HEADER include/spdk/init.h 00:04:27.420 TEST_HEADER include/spdk/ioat.h 00:04:27.420 TEST_HEADER include/spdk/ioat_spec.h 00:04:27.420 TEST_HEADER include/spdk/iscsi_spec.h 00:04:27.420 TEST_HEADER include/spdk/json.h 00:04:27.420 CC test/dma/test_dma/test_dma.o 00:04:27.420 TEST_HEADER include/spdk/jsonrpc.h 00:04:27.420 TEST_HEADER include/spdk/likely.h 00:04:27.420 TEST_HEADER include/spdk/log.h 00:04:27.420 TEST_HEADER include/spdk/lvol.h 00:04:27.420 TEST_HEADER include/spdk/memory.h 00:04:27.420 TEST_HEADER include/spdk/mmio.h 00:04:27.420 TEST_HEADER include/spdk/nbd.h 00:04:27.420 TEST_HEADER include/spdk/notify.h 00:04:27.420 TEST_HEADER include/spdk/nvme.h 00:04:27.420 TEST_HEADER include/spdk/nvme_intel.h 00:04:27.420 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:27.420 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:27.420 TEST_HEADER include/spdk/nvme_spec.h 00:04:27.420 TEST_HEADER include/spdk/nvme_zns.h 00:04:27.420 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:27.420 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:27.420 CC app/spdk_nvme_discover/discovery_aer.o 00:04:27.420 TEST_HEADER include/spdk/nvmf.h 00:04:27.420 TEST_HEADER include/spdk/nvmf_spec.h 00:04:27.420 TEST_HEADER include/spdk/nvmf_transport.h 00:04:27.420 TEST_HEADER include/spdk/opal.h 00:04:27.420 TEST_HEADER include/spdk/opal_spec.h 00:04:27.420 TEST_HEADER include/spdk/pci_ids.h 00:04:27.420 TEST_HEADER include/spdk/pipe.h 00:04:27.420 TEST_HEADER include/spdk/queue.h 00:04:27.420 TEST_HEADER include/spdk/reduce.h 00:04:27.420 TEST_HEADER include/spdk/rpc.h 00:04:27.420 TEST_HEADER include/spdk/scheduler.h 00:04:27.420 TEST_HEADER include/spdk/scsi.h 00:04:27.420 TEST_HEADER include/spdk/scsi_spec.h 00:04:27.420 TEST_HEADER include/spdk/sock.h 
00:04:27.420 TEST_HEADER include/spdk/stdinc.h 00:04:27.420 TEST_HEADER include/spdk/string.h 00:04:27.420 TEST_HEADER include/spdk/thread.h 00:04:27.420 TEST_HEADER include/spdk/trace.h 00:04:27.420 TEST_HEADER include/spdk/trace_parser.h 00:04:27.420 TEST_HEADER include/spdk/tree.h 00:04:27.420 TEST_HEADER include/spdk/ublk.h 00:04:27.420 TEST_HEADER include/spdk/util.h 00:04:27.420 TEST_HEADER include/spdk/uuid.h 00:04:27.420 TEST_HEADER include/spdk/version.h 00:04:27.420 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:27.420 CC app/spdk_top/spdk_top.o 00:04:27.420 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:27.420 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:27.420 TEST_HEADER include/spdk/vhost.h 00:04:27.420 TEST_HEADER include/spdk/vmd.h 00:04:27.420 TEST_HEADER include/spdk/xor.h 00:04:27.420 TEST_HEADER include/spdk/zipf.h 00:04:27.420 CXX test/cpp_headers/accel.o 00:04:27.420 LINK mkfs 00:04:27.420 CC test/env/mem_callbacks/mem_callbacks.o 00:04:27.679 LINK spdk_nvme_discover 00:04:27.679 LINK cmb_copy 00:04:27.679 CXX test/cpp_headers/accel_module.o 00:04:27.679 LINK test_dma 00:04:27.679 LINK mem_callbacks 00:04:27.679 CC test/event/event_perf/event_perf.o 00:04:27.938 CC test/event/reactor/reactor.o 00:04:27.938 CXX test/cpp_headers/assert.o 00:04:27.938 CC examples/nvme/abort/abort.o 00:04:27.938 CC test/env/vtophys/vtophys.o 00:04:27.938 LINK event_perf 00:04:27.938 LINK reactor 00:04:27.938 CXX test/cpp_headers/barrier.o 00:04:27.938 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:27.938 LINK iscsi_fuzz 00:04:28.197 LINK spdk_nvme_identify 00:04:28.197 LINK vtophys 00:04:28.197 LINK env_dpdk_post_init 00:04:28.197 CC test/event/reactor_perf/reactor_perf.o 00:04:28.197 CXX test/cpp_headers/base64.o 00:04:28.197 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:28.197 LINK abort 00:04:28.197 LINK spdk_nvme_perf 00:04:28.197 CXX test/cpp_headers/bdev.o 00:04:28.197 CXX test/cpp_headers/bdev_module.o 00:04:28.197 LINK spdk_top 00:04:28.456 LINK reactor_perf 00:04:28.456 LINK pmr_persistence 00:04:28.456 CC app/vhost/vhost.o 00:04:28.456 CC app/spdk_dd/spdk_dd.o 00:04:28.456 CC test/env/memory/memory_ut.o 00:04:28.456 CC test/event/app_repeat/app_repeat.o 00:04:28.456 CXX test/cpp_headers/bdev_zone.o 00:04:28.456 CC test/rpc_client/rpc_client_test.o 00:04:28.456 CXX test/cpp_headers/bit_array.o 00:04:28.456 CC app/fio/nvme/fio_plugin.o 00:04:28.715 CC test/nvme/aer/aer.o 00:04:28.715 LINK app_repeat 00:04:28.715 LINK vhost 00:04:28.715 CC test/lvol/esnap/esnap.o 00:04:28.715 CXX test/cpp_headers/bit_pool.o 00:04:28.715 LINK rpc_client_test 00:04:28.715 CC app/fio/bdev/fio_plugin.o 00:04:28.715 LINK spdk_dd 00:04:28.715 CXX test/cpp_headers/blob_bdev.o 00:04:28.974 LINK aer 00:04:28.974 CC test/event/scheduler/scheduler.o 00:04:28.974 CC test/env/pci/pci_ut.o 00:04:28.974 LINK memory_ut 00:04:28.974 CC test/thread/poller_perf/poller_perf.o 00:04:28.974 CXX test/cpp_headers/blobfs_bdev.o 00:04:28.974 CC test/nvme/reset/reset.o 00:04:28.974 CC test/nvme/sgl/sgl.o 00:04:29.232 LINK spdk_nvme 00:04:29.232 LINK scheduler 00:04:29.232 LINK poller_perf 00:04:29.232 CXX test/cpp_headers/blobfs.o 00:04:29.232 CC test/nvme/e2edp/nvme_dp.o 00:04:29.232 CC test/nvme/overhead/overhead.o 00:04:29.232 LINK spdk_bdev 00:04:29.232 LINK pci_ut 00:04:29.492 LINK sgl 00:04:29.492 LINK reset 00:04:29.492 CXX test/cpp_headers/blob.o 00:04:29.492 CC test/nvme/err_injection/err_injection.o 00:04:29.492 CXX test/cpp_headers/conf.o 00:04:29.492 CC test/nvme/startup/startup.o 
00:04:29.492 LINK nvme_dp 00:04:29.492 CXX test/cpp_headers/config.o 00:04:29.492 CXX test/cpp_headers/cpuset.o 00:04:29.492 CXX test/cpp_headers/crc16.o 00:04:29.492 LINK overhead 00:04:29.492 LINK err_injection 00:04:29.492 CXX test/cpp_headers/crc32.o 00:04:29.492 CC test/nvme/reserve/reserve.o 00:04:29.752 LINK startup 00:04:29.752 CC test/nvme/simple_copy/simple_copy.o 00:04:29.752 CXX test/cpp_headers/crc64.o 00:04:29.752 CC test/nvme/connect_stress/connect_stress.o 00:04:29.752 CXX test/cpp_headers/dif.o 00:04:29.752 CXX test/cpp_headers/dma.o 00:04:29.752 CC test/nvme/boot_partition/boot_partition.o 00:04:29.752 CC test/nvme/compliance/nvme_compliance.o 00:04:29.752 LINK reserve 00:04:29.752 CC test/nvme/fused_ordering/fused_ordering.o 00:04:30.010 LINK simple_copy 00:04:30.010 CXX test/cpp_headers/endian.o 00:04:30.010 LINK connect_stress 00:04:30.010 CXX test/cpp_headers/env_dpdk.o 00:04:30.010 LINK boot_partition 00:04:30.010 CXX test/cpp_headers/env.o 00:04:30.010 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:30.010 LINK fused_ordering 00:04:30.010 CXX test/cpp_headers/event.o 00:04:30.010 CXX test/cpp_headers/fd_group.o 00:04:30.010 CXX test/cpp_headers/fd.o 00:04:30.010 CC test/nvme/fdp/fdp.o 00:04:30.268 CC test/nvme/cuse/cuse.o 00:04:30.268 CXX test/cpp_headers/file.o 00:04:30.268 LINK nvme_compliance 00:04:30.268 LINK doorbell_aers 00:04:30.268 CXX test/cpp_headers/ftl.o 00:04:30.268 CXX test/cpp_headers/gpt_spec.o 00:04:30.268 CXX test/cpp_headers/hexlify.o 00:04:30.268 CXX test/cpp_headers/histogram_data.o 00:04:30.268 CXX test/cpp_headers/idxd.o 00:04:30.268 CXX test/cpp_headers/idxd_spec.o 00:04:30.268 CXX test/cpp_headers/init.o 00:04:30.527 LINK fdp 00:04:30.527 CXX test/cpp_headers/ioat.o 00:04:30.527 CXX test/cpp_headers/ioat_spec.o 00:04:30.527 CXX test/cpp_headers/iscsi_spec.o 00:04:30.527 CXX test/cpp_headers/json.o 00:04:30.527 CXX test/cpp_headers/jsonrpc.o 00:04:30.527 CXX test/cpp_headers/likely.o 00:04:30.527 CXX test/cpp_headers/log.o 00:04:30.527 CXX test/cpp_headers/lvol.o 00:04:30.527 CXX test/cpp_headers/memory.o 00:04:30.527 CXX test/cpp_headers/mmio.o 00:04:30.527 CXX test/cpp_headers/nbd.o 00:04:30.527 CXX test/cpp_headers/notify.o 00:04:30.527 CXX test/cpp_headers/nvme.o 00:04:30.786 CXX test/cpp_headers/nvme_intel.o 00:04:30.786 CXX test/cpp_headers/nvme_ocssd.o 00:04:30.786 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:30.786 CXX test/cpp_headers/nvme_spec.o 00:04:30.786 CXX test/cpp_headers/nvme_zns.o 00:04:30.786 CXX test/cpp_headers/nvmf_cmd.o 00:04:30.786 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:30.786 CXX test/cpp_headers/nvmf.o 00:04:30.786 CXX test/cpp_headers/nvmf_spec.o 00:04:30.786 CXX test/cpp_headers/nvmf_transport.o 00:04:31.045 CXX test/cpp_headers/opal.o 00:04:31.045 CXX test/cpp_headers/opal_spec.o 00:04:31.045 CXX test/cpp_headers/pci_ids.o 00:04:31.045 CXX test/cpp_headers/pipe.o 00:04:31.045 CXX test/cpp_headers/queue.o 00:04:31.045 CXX test/cpp_headers/reduce.o 00:04:31.045 CXX test/cpp_headers/rpc.o 00:04:31.045 CXX test/cpp_headers/scheduler.o 00:04:31.045 CXX test/cpp_headers/scsi.o 00:04:31.045 CXX test/cpp_headers/scsi_spec.o 00:04:31.045 CXX test/cpp_headers/sock.o 00:04:31.045 CXX test/cpp_headers/stdinc.o 00:04:31.045 CXX test/cpp_headers/string.o 00:04:31.045 LINK cuse 00:04:31.303 CXX test/cpp_headers/thread.o 00:04:31.303 CXX test/cpp_headers/trace.o 00:04:31.303 CXX test/cpp_headers/trace_parser.o 00:04:31.303 CXX test/cpp_headers/tree.o 00:04:31.303 CXX test/cpp_headers/ublk.o 00:04:31.303 CXX 
test/cpp_headers/util.o 00:04:31.303 CXX test/cpp_headers/uuid.o 00:04:31.303 CXX test/cpp_headers/version.o 00:04:31.303 CXX test/cpp_headers/vfio_user_pci.o 00:04:31.303 CXX test/cpp_headers/vfio_user_spec.o 00:04:31.303 CXX test/cpp_headers/vhost.o 00:04:31.304 CXX test/cpp_headers/vmd.o 00:04:31.562 CXX test/cpp_headers/xor.o 00:04:31.562 CXX test/cpp_headers/zipf.o 00:04:32.938 LINK esnap 00:04:33.505 00:04:33.505 real 0m51.523s 00:04:33.505 user 4m45.966s 00:04:33.505 sys 1m1.522s 00:04:33.506 10:09:46 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:33.506 10:09:46 -- common/autotest_common.sh@10 -- $ set +x 00:04:33.506 ************************************ 00:04:33.506 END TEST make 00:04:33.506 ************************************ 00:04:33.506 10:09:46 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:33.506 10:09:46 -- nvmf/common.sh@7 -- # uname -s 00:04:33.506 10:09:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:33.506 10:09:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:33.506 10:09:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:33.506 10:09:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:33.506 10:09:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:33.506 10:09:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:33.506 10:09:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:33.506 10:09:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:33.506 10:09:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:33.506 10:09:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:33.506 10:09:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:04:33.506 10:09:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:04:33.506 10:09:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:33.506 10:09:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:33.506 10:09:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:33.506 10:09:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:33.506 10:09:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:33.506 10:09:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:33.506 10:09:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:33.506 10:09:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.506 10:09:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.506 10:09:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.506 10:09:46 -- paths/export.sh@5 -- # export PATH 00:04:33.506 10:09:46 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.506 10:09:46 -- nvmf/common.sh@46 -- # : 0 00:04:33.506 10:09:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:33.506 10:09:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:33.506 10:09:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:33.506 10:09:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:33.506 10:09:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:33.506 10:09:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:33.506 10:09:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:33.506 10:09:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:33.506 10:09:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:33.506 10:09:46 -- spdk/autotest.sh@32 -- # uname -s 00:04:33.506 10:09:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:33.506 10:09:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:33.506 10:09:46 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:33.506 10:09:46 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:33.506 10:09:46 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:33.506 10:09:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:33.506 10:09:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:33.506 10:09:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:33.506 10:09:46 -- spdk/autotest.sh@48 -- # udevadm_pid=59623 00:04:33.506 10:09:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:33.506 10:09:46 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:33.506 10:09:46 -- spdk/autotest.sh@54 -- # echo 59625 00:04:33.506 10:09:46 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:33.506 10:09:46 -- spdk/autotest.sh@56 -- # echo 59626 00:04:33.506 10:09:46 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:33.765 10:09:46 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:33.765 10:09:46 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:33.765 10:09:46 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:33.765 10:09:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:33.765 10:09:46 -- common/autotest_common.sh@10 -- # set +x 00:04:33.765 10:09:46 -- spdk/autotest.sh@70 -- # create_test_list 00:04:33.765 10:09:46 -- common/autotest_common.sh@736 -- # xtrace_disable 00:04:33.765 10:09:46 -- common/autotest_common.sh@10 -- # set +x 00:04:33.765 10:09:46 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:33.765 10:09:46 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:33.765 10:09:47 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:33.765 10:09:47 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:33.765 10:09:47 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:33.765 10:09:47 -- spdk/autotest.sh@76 -- # 
freebsd_update_contigmem_mod 00:04:33.765 10:09:47 -- common/autotest_common.sh@1440 -- # uname 00:04:33.765 10:09:47 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:04:33.765 10:09:47 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:33.765 10:09:47 -- common/autotest_common.sh@1460 -- # uname 00:04:33.765 10:09:47 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:04:33.765 10:09:47 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:04:33.765 10:09:47 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:04:33.765 10:09:47 -- spdk/autotest.sh@83 -- # hash lcov 00:04:33.765 10:09:47 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:33.765 10:09:47 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:04:33.765 --rc lcov_branch_coverage=1 00:04:33.765 --rc lcov_function_coverage=1 00:04:33.765 --rc genhtml_branch_coverage=1 00:04:33.765 --rc genhtml_function_coverage=1 00:04:33.765 --rc genhtml_legend=1 00:04:33.765 --rc geninfo_all_blocks=1 00:04:33.765 ' 00:04:33.765 10:09:47 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:04:33.765 --rc lcov_branch_coverage=1 00:04:33.765 --rc lcov_function_coverage=1 00:04:33.765 --rc genhtml_branch_coverage=1 00:04:33.765 --rc genhtml_function_coverage=1 00:04:33.765 --rc genhtml_legend=1 00:04:33.765 --rc geninfo_all_blocks=1 00:04:33.765 ' 00:04:33.765 10:09:47 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:04:33.765 --rc lcov_branch_coverage=1 00:04:33.765 --rc lcov_function_coverage=1 00:04:33.765 --rc genhtml_branch_coverage=1 00:04:33.765 --rc genhtml_function_coverage=1 00:04:33.765 --rc genhtml_legend=1 00:04:33.765 --rc geninfo_all_blocks=1 00:04:33.765 --no-external' 00:04:33.765 10:09:47 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:04:33.765 --rc lcov_branch_coverage=1 00:04:33.765 --rc lcov_function_coverage=1 00:04:33.765 --rc genhtml_branch_coverage=1 00:04:33.765 --rc genhtml_function_coverage=1 00:04:33.765 --rc genhtml_legend=1 00:04:33.765 --rc geninfo_all_blocks=1 00:04:33.765 --no-external' 00:04:33.765 10:09:47 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:33.765 lcov: LCOV version 1.14 00:04:33.765 10:09:47 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:41.907 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:41.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:41.907 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:41.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:41.908 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:41.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:00.019 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:00.019 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:00.019 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:00.019 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:00.019 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:00.019 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:00.019 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:00.020 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:00.020 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 
00:05:00.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:00.279 
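The "no functions found" warnings above are expected at this point in the run: the Baseline capture is taken with lcov's -i (initial) switch before any test has executed, so .gcno files whose code never ran yield empty records. A minimal sketch of the capture-and-merge flow, using the src/out paths from this run; the later merge and genhtml steps are not shown in this log and are included here only as the usual follow-up:
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
# 1. zero-coverage baseline taken before the tests run (-i = initial capture)
lcov $LCOV_OPTS -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"
# 2. real capture after the tests have run
lcov $LCOV_OPTS -q -c -t Tests -d "$src" -o "$out/cov_test.info"
# 3. merge both so never-executed files still show up at 0% in the report
lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
genhtml "$out/cov_total.info" -o "$out/coverage"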
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:00.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:00.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:04.493 10:10:17 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:05:04.493 10:10:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:04.493 10:10:17 -- common/autotest_common.sh@10 -- # set +x 00:05:04.493 10:10:17 -- spdk/autotest.sh@102 -- # rm -f 00:05:04.493 10:10:17 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.493 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.493 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:04.493 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:05:04.493 10:10:17 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:05:04.493 10:10:17 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:04.493 10:10:17 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:04.493 10:10:17 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:04.493 10:10:17 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:04.493 10:10:17 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:04.493 10:10:17 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:04.493 10:10:17 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:04.493 10:10:17 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:04.493 10:10:17 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:04.493 10:10:17 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:04.493 10:10:17 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:04.493 10:10:17 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:04.493 10:10:17 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:04.493 10:10:17 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:04.493 10:10:17 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:04.493 10:10:17 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:04.493 10:10:17 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:04.493 10:10:17 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:04.493 10:10:17 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:04.493 10:10:17 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:04.493 10:10:17 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:04.493 10:10:17 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:04.493 10:10:17 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:04.493 10:10:17 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:05:04.493 10:10:17 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:05:04.493 10:10:17 -- spdk/autotest.sh@121 -- # grep -v p 00:05:04.493 10:10:17 -- 
spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:04.493 10:10:17 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:04.493 10:10:17 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:05:04.493 10:10:17 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:04.493 10:10:17 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:04.493 No valid GPT data, bailing 00:05:04.493 10:10:17 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:04.493 10:10:17 -- scripts/common.sh@393 -- # pt= 00:05:04.493 10:10:17 -- scripts/common.sh@394 -- # return 1 00:05:04.493 10:10:17 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:04.493 1+0 records in 00:05:04.493 1+0 records out 00:05:04.493 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00418846 s, 250 MB/s 00:05:04.493 10:10:17 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:04.493 10:10:17 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:04.493 10:10:17 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:05:04.493 10:10:17 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:05:04.493 10:10:17 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:04.752 No valid GPT data, bailing 00:05:04.752 10:10:17 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:04.752 10:10:17 -- scripts/common.sh@393 -- # pt= 00:05:04.752 10:10:17 -- scripts/common.sh@394 -- # return 1 00:05:04.752 10:10:17 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:04.752 1+0 records in 00:05:04.752 1+0 records out 00:05:04.752 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00418962 s, 250 MB/s 00:05:04.752 10:10:17 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:04.752 10:10:17 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:04.752 10:10:17 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n2 00:05:04.752 10:10:17 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:05:04.752 10:10:17 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:04.752 No valid GPT data, bailing 00:05:04.752 10:10:18 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:04.752 10:10:18 -- scripts/common.sh@393 -- # pt= 00:05:04.752 10:10:18 -- scripts/common.sh@394 -- # return 1 00:05:04.752 10:10:18 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:04.752 1+0 records in 00:05:04.752 1+0 records out 00:05:04.752 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00459565 s, 228 MB/s 00:05:04.752 10:10:18 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:04.752 10:10:18 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:04.752 10:10:18 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n3 00:05:04.752 10:10:18 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:05:04.752 10:10:18 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:04.752 No valid GPT data, bailing 00:05:04.752 10:10:18 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:04.752 10:10:18 -- scripts/common.sh@393 -- # pt= 00:05:04.752 10:10:18 -- scripts/common.sh@394 -- # return 1 00:05:04.752 10:10:18 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:04.752 1+0 records in 00:05:04.752 1+0 records out 
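For reference, the per-device loop whose dd output appears here does roughly the following: every /dev/nvme*n* namespace that is not a partition and carries no partition-table signature gets its first MiB zeroed so stale metadata cannot leak into later tests. This is a hedged reconstruction of the trace, simplified to the blkid check only (the real helper also consults scripts/spdk-gpt.py) and it requires root:
for dev in $(ls /dev/nvme*n* | grep -v p || true); do
    # skip devices that still carry a recognizable partition table
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    [[ -n $pt ]] && continue
    # wipe the first MiB of the namespace before the tests reuse it
    dd if=/dev/zero of="$dev" bs=1M count=1
done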
00:05:04.752 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00442061 s, 237 MB/s 00:05:04.752 10:10:18 -- spdk/autotest.sh@129 -- # sync 00:05:04.752 10:10:18 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:04.752 10:10:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:04.752 10:10:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:06.652 10:10:19 -- spdk/autotest.sh@135 -- # uname -s 00:05:06.652 10:10:19 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:05:06.652 10:10:19 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:06.652 10:10:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:06.652 10:10:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.652 10:10:20 -- common/autotest_common.sh@10 -- # set +x 00:05:06.652 ************************************ 00:05:06.652 START TEST setup.sh 00:05:06.653 ************************************ 00:05:06.653 10:10:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:06.653 * Looking for test storage... 00:05:06.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:06.653 10:10:20 -- setup/test-setup.sh@10 -- # uname -s 00:05:06.653 10:10:20 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:06.653 10:10:20 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:06.653 10:10:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:06.653 10:10:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.653 10:10:20 -- common/autotest_common.sh@10 -- # set +x 00:05:06.653 ************************************ 00:05:06.653 START TEST acl 00:05:06.653 ************************************ 00:05:06.653 10:10:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:06.911 * Looking for test storage... 
00:05:06.911 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:06.911 10:10:20 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:06.911 10:10:20 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:06.911 10:10:20 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:06.911 10:10:20 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:06.911 10:10:20 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:06.911 10:10:20 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:06.911 10:10:20 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:06.911 10:10:20 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:06.911 10:10:20 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:06.911 10:10:20 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:06.911 10:10:20 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:06.911 10:10:20 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:06.911 10:10:20 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:06.911 10:10:20 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:06.911 10:10:20 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:06.911 10:10:20 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:06.911 10:10:20 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:06.911 10:10:20 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:06.911 10:10:20 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:06.911 10:10:20 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:06.911 10:10:20 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:06.911 10:10:20 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:06.911 10:10:20 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:06.911 10:10:20 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:06.911 10:10:20 -- setup/acl.sh@12 -- # devs=() 00:05:06.911 10:10:20 -- setup/acl.sh@12 -- # declare -a devs 00:05:06.911 10:10:20 -- setup/acl.sh@13 -- # drivers=() 00:05:06.911 10:10:20 -- setup/acl.sh@13 -- # declare -A drivers 00:05:06.911 10:10:20 -- setup/acl.sh@51 -- # setup reset 00:05:06.911 10:10:20 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:06.911 10:10:20 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:07.478 10:10:20 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:07.478 10:10:20 -- setup/acl.sh@16 -- # local dev driver 00:05:07.478 10:10:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:07.478 10:10:20 -- setup/acl.sh@15 -- # setup output status 00:05:07.478 10:10:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.478 10:10:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:07.737 Hugepages 00:05:07.737 node hugesize free / total 00:05:07.737 10:10:21 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:07.737 10:10:21 -- setup/acl.sh@19 -- # continue 00:05:07.737 10:10:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:07.737 00:05:07.737 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:07.737 10:10:21 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:07.737 10:10:21 -- setup/acl.sh@19 -- # continue 00:05:07.737 10:10:21 -- setup/acl.sh@18 -- # read -r 
_ dev _ _ _ driver _ 00:05:07.737 10:10:21 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:07.737 10:10:21 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:07.737 10:10:21 -- setup/acl.sh@20 -- # continue 00:05:07.737 10:10:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:07.737 10:10:21 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:07.737 10:10:21 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:07.737 10:10:21 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:07.737 10:10:21 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:07.737 10:10:21 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:07.737 10:10:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:07.995 10:10:21 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:07.995 10:10:21 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:07.995 10:10:21 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:07.995 10:10:21 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:07.995 10:10:21 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:07.995 10:10:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:07.995 10:10:21 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:07.996 10:10:21 -- setup/acl.sh@54 -- # run_test denied denied 00:05:07.996 10:10:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:07.996 10:10:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.996 10:10:21 -- common/autotest_common.sh@10 -- # set +x 00:05:07.996 ************************************ 00:05:07.996 START TEST denied 00:05:07.996 ************************************ 00:05:07.996 10:10:21 -- common/autotest_common.sh@1104 -- # denied 00:05:07.996 10:10:21 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:07.996 10:10:21 -- setup/acl.sh@38 -- # setup output config 00:05:07.996 10:10:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.996 10:10:21 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:07.996 10:10:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:08.931 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:08.931 10:10:22 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:08.931 10:10:22 -- setup/acl.sh@28 -- # local dev driver 00:05:08.931 10:10:22 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:08.931 10:10:22 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:08.931 10:10:22 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:08.931 10:10:22 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:08.931 10:10:22 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:08.931 10:10:22 -- setup/acl.sh@41 -- # setup reset 00:05:08.931 10:10:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:08.931 10:10:22 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:09.497 00:05:09.497 real 0m1.426s 00:05:09.497 user 0m0.553s 00:05:09.497 sys 0m0.809s 00:05:09.498 10:10:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.498 10:10:22 -- common/autotest_common.sh@10 -- # set +x 00:05:09.498 ************************************ 00:05:09.498 END TEST denied 00:05:09.498 ************************************ 00:05:09.498 10:10:22 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:09.498 10:10:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:09.498 10:10:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:09.498 
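The denied/allowed pair being run here exercises setup.sh's device filtering: with PCI_BLOCKED set, setup.sh skips that controller and leaves its kernel driver alone; with PCI_ALLOWED set, only the listed address is rebound. A rough sketch of how the same check can be driven by hand; the grep patterns are the ones from the trace, but the direct scripts/setup.sh invocation stands in for the test's own setup()/verify() helpers and needs root, so treat it as illustrative:
# denied: the blocked controller must be skipped and keep its nvme driver
PCI_BLOCKED=' 0000:00:06.0' scripts/setup.sh config | grep 'Skipping denied controller at 0000:00:06.0'
[[ $(readlink -f /sys/bus/pci/devices/0000:00:06.0/driver) == */drivers/nvme ]]
# allowed: only the allowed controller is rebound by setup.sh config
PCI_ALLOWED='0000:00:06.0' scripts/setup.sh config | grep -E '0000:00:06.0 .*: nvme -> .*'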
10:10:22 -- common/autotest_common.sh@10 -- # set +x 00:05:09.498 ************************************ 00:05:09.498 START TEST allowed 00:05:09.498 ************************************ 00:05:09.498 10:10:22 -- common/autotest_common.sh@1104 -- # allowed 00:05:09.498 10:10:22 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:09.498 10:10:22 -- setup/acl.sh@45 -- # setup output config 00:05:09.498 10:10:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.498 10:10:22 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:09.498 10:10:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:10.068 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:10.068 10:10:23 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:05:10.068 10:10:23 -- setup/acl.sh@28 -- # local dev driver 00:05:10.068 10:10:23 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:10.068 10:10:23 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:10.068 10:10:23 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:05:10.068 10:10:23 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:10.068 10:10:23 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:10.068 10:10:23 -- setup/acl.sh@48 -- # setup reset 00:05:10.068 10:10:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:10.068 10:10:23 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:11.006 ************************************ 00:05:11.006 END TEST allowed 00:05:11.006 ************************************ 00:05:11.006 00:05:11.006 real 0m1.466s 00:05:11.006 user 0m0.641s 00:05:11.006 sys 0m0.826s 00:05:11.006 10:10:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.006 10:10:24 -- common/autotest_common.sh@10 -- # set +x 00:05:11.006 ************************************ 00:05:11.006 END TEST acl 00:05:11.006 ************************************ 00:05:11.006 00:05:11.006 real 0m4.110s 00:05:11.006 user 0m1.738s 00:05:11.006 sys 0m2.333s 00:05:11.006 10:10:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.006 10:10:24 -- common/autotest_common.sh@10 -- # set +x 00:05:11.006 10:10:24 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:11.006 10:10:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:11.006 10:10:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:11.006 10:10:24 -- common/autotest_common.sh@10 -- # set +x 00:05:11.006 ************************************ 00:05:11.006 START TEST hugepages 00:05:11.006 ************************************ 00:05:11.006 10:10:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:11.006 * Looking for test storage... 
00:05:11.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:11.006 10:10:24 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:11.006 10:10:24 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:11.006 10:10:24 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:11.006 10:10:24 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:11.006 10:10:24 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:11.006 10:10:24 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:11.006 10:10:24 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:11.006 10:10:24 -- setup/common.sh@18 -- # local node= 00:05:11.006 10:10:24 -- setup/common.sh@19 -- # local var val 00:05:11.006 10:10:24 -- setup/common.sh@20 -- # local mem_f mem 00:05:11.006 10:10:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.006 10:10:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.006 10:10:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.006 10:10:24 -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.006 10:10:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.006 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.006 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 4911960 kB' 'MemAvailable: 7396128 kB' 'Buffers: 2436 kB' 'Cached: 2689396 kB' 'SwapCached: 0 kB' 'Active: 434856 kB' 'Inactive: 2360440 kB' 'Active(anon): 113956 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360440 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 105200 kB' 'Mapped: 48896 kB' 'Shmem: 10492 kB' 'KReclaimable: 79532 kB' 'Slab: 157220 kB' 'SReclaimable: 79532 kB' 'SUnreclaim: 77688 kB' 'KernelStack: 6616 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 336464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- 
setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.007 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.007 10:10:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.008 10:10:24 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # continue 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.008 10:10:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.008 10:10:24 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:11.008 10:10:24 -- setup/common.sh@33 -- # echo 2048 00:05:11.008 10:10:24 -- setup/common.sh@33 -- # return 0 00:05:11.008 10:10:24 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:11.008 10:10:24 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:11.008 10:10:24 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:11.008 10:10:24 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:11.008 10:10:24 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:11.008 10:10:24 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
00:05:11.008 10:10:24 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:11.008 10:10:24 -- setup/hugepages.sh@207 -- # get_nodes 00:05:11.008 10:10:24 -- setup/hugepages.sh@27 -- # local node 00:05:11.008 10:10:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:11.008 10:10:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:11.008 10:10:24 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:11.008 10:10:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:11.008 10:10:24 -- setup/hugepages.sh@208 -- # clear_hp 00:05:11.008 10:10:24 -- setup/hugepages.sh@37 -- # local node hp 00:05:11.008 10:10:24 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:11.008 10:10:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:11.008 10:10:24 -- setup/hugepages.sh@41 -- # echo 0 00:05:11.008 10:10:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:11.008 10:10:24 -- setup/hugepages.sh@41 -- # echo 0 00:05:11.008 10:10:24 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:11.008 10:10:24 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:11.008 10:10:24 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:11.008 10:10:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:11.008 10:10:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:11.008 10:10:24 -- common/autotest_common.sh@10 -- # set +x 00:05:11.008 ************************************ 00:05:11.008 START TEST default_setup 00:05:11.008 ************************************ 00:05:11.008 10:10:24 -- common/autotest_common.sh@1104 -- # default_setup 00:05:11.008 10:10:24 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:11.008 10:10:24 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:11.008 10:10:24 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:11.008 10:10:24 -- setup/hugepages.sh@51 -- # shift 00:05:11.008 10:10:24 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:11.008 10:10:24 -- setup/hugepages.sh@52 -- # local node_ids 00:05:11.008 10:10:24 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:11.008 10:10:24 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:11.008 10:10:24 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:11.008 10:10:24 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:11.008 10:10:24 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:11.008 10:10:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:11.008 10:10:24 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:11.008 10:10:24 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:11.008 10:10:24 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:11.008 10:10:24 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:11.008 10:10:24 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:11.008 10:10:24 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:11.008 10:10:24 -- setup/hugepages.sh@73 -- # return 0 00:05:11.008 10:10:24 -- setup/hugepages.sh@137 -- # setup output 00:05:11.008 10:10:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.008 10:10:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:11.945 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.945 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.945 
0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.945 10:10:25 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:11.945 10:10:25 -- setup/hugepages.sh@89 -- # local node 00:05:11.945 10:10:25 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:11.945 10:10:25 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:11.945 10:10:25 -- setup/hugepages.sh@92 -- # local surp 00:05:11.945 10:10:25 -- setup/hugepages.sh@93 -- # local resv 00:05:11.945 10:10:25 -- setup/hugepages.sh@94 -- # local anon 00:05:11.945 10:10:25 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:11.945 10:10:25 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:11.945 10:10:25 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:11.945 10:10:25 -- setup/common.sh@18 -- # local node= 00:05:11.945 10:10:25 -- setup/common.sh@19 -- # local var val 00:05:11.945 10:10:25 -- setup/common.sh@20 -- # local mem_f mem 00:05:11.945 10:10:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.945 10:10:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.945 10:10:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.945 10:10:25 -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.945 10:10:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.945 10:10:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7013356 kB' 'MemAvailable: 9497540 kB' 'Buffers: 2436 kB' 'Cached: 2689424 kB' 'SwapCached: 0 kB' 'Active: 450612 kB' 'Inactive: 2360480 kB' 'Active(anon): 129712 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120900 kB' 'Mapped: 48836 kB' 'Shmem: 10468 kB' 'KReclaimable: 79484 kB' 'Slab: 157084 kB' 'SReclaimable: 79484 kB' 'SUnreclaim: 77600 kB' 'KernelStack: 6576 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:11.945 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.945 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.945 10:10:25 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.945 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.945 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.945 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.945 10:10:25 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.945 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.945 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.945 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.945 10:10:25 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.945 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.945 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.945 10:10:25 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:11.945 10:10:25 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.945 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.945 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.945 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.945 10:10:25 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.945 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.945 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.945 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.945 10:10:25 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.945 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.945 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.945 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.945 10:10:25 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.945 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.945 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.945 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.945 10:10:25 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.945 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.945 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.945 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.945 10:10:25 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.945 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.945 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.945 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.945 10:10:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.945 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 
10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 
-- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.946 10:10:25 -- setup/common.sh@33 -- # echo 0 00:05:11.946 10:10:25 -- setup/common.sh@33 -- # return 0 00:05:11.946 10:10:25 -- setup/hugepages.sh@97 -- # anon=0 00:05:11.946 10:10:25 -- 
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:11.946 10:10:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.946 10:10:25 -- setup/common.sh@18 -- # local node= 00:05:11.946 10:10:25 -- setup/common.sh@19 -- # local var val 00:05:11.946 10:10:25 -- setup/common.sh@20 -- # local mem_f mem 00:05:11.946 10:10:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.946 10:10:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.946 10:10:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.946 10:10:25 -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.946 10:10:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7013360 kB' 'MemAvailable: 9497444 kB' 'Buffers: 2436 kB' 'Cached: 2689424 kB' 'SwapCached: 0 kB' 'Active: 450140 kB' 'Inactive: 2360488 kB' 'Active(anon): 129240 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360488 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120428 kB' 'Mapped: 48708 kB' 'Shmem: 10468 kB' 'KReclaimable: 79268 kB' 'Slab: 156776 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77508 kB' 'KernelStack: 6544 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 
00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- 
setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.946 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.946 10:10:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 
00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.947 10:10:25 -- setup/common.sh@33 -- # echo 0 00:05:11.947 10:10:25 -- setup/common.sh@33 -- # return 0 00:05:11.947 10:10:25 -- setup/hugepages.sh@99 -- # surp=0 00:05:11.947 10:10:25 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:11.947 10:10:25 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:11.947 10:10:25 -- setup/common.sh@18 -- # local node= 00:05:11.947 10:10:25 -- setup/common.sh@19 -- # local var val 00:05:11.947 10:10:25 -- setup/common.sh@20 -- # local mem_f mem 00:05:11.947 10:10:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.947 10:10:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.947 10:10:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.947 10:10:25 -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.947 10:10:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7013360 kB' 'MemAvailable: 9497444 kB' 'Buffers: 2436 kB' 'Cached: 2689424 kB' 'SwapCached: 0 kB' 'Active: 450056 kB' 'Inactive: 2360488 kB' 
'Active(anon): 129156 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360488 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120300 kB' 'Mapped: 48708 kB' 'Shmem: 10468 kB' 'KReclaimable: 79268 kB' 'Slab: 156776 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77508 kB' 'KernelStack: 6528 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 
10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 
10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.947 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.947 10:10:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.947 10:10:25 -- setup/common.sh@33 -- # echo 0 00:05:11.947 10:10:25 -- setup/common.sh@33 -- # return 0 00:05:11.947 10:10:25 -- setup/hugepages.sh@100 -- # resv=0 00:05:11.947 10:10:25 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:11.947 nr_hugepages=1024 00:05:11.947 10:10:25 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:11.947 resv_hugepages=0 00:05:11.947 10:10:25 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:11.947 surplus_hugepages=0 00:05:11.947 anon_hugepages=0 00:05:11.947 10:10:25 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:11.947 10:10:25 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:11.947 10:10:25 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:11.947 10:10:25 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:11.947 10:10:25 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:11.947 10:10:25 -- setup/common.sh@18 -- # local node= 00:05:11.947 10:10:25 -- setup/common.sh@19 -- # local var val 00:05:11.947 10:10:25 -- setup/common.sh@20 -- # local mem_f mem 00:05:11.947 10:10:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.947 10:10:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.948 10:10:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.948 10:10:25 -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.948 10:10:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.948 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.948 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.948 10:10:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7012856 kB' 'MemAvailable: 9496940 kB' 'Buffers: 2436 kB' 'Cached: 2689424 kB' 'SwapCached: 0 kB' 'Active: 450172 kB' 'Inactive: 2360488 kB' 'Active(anon): 129272 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360488 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120420 kB' 'Mapped: 48708 kB' 'Shmem: 10468 kB' 'KReclaimable: 79268 kB' 'Slab: 156776 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77508 kB' 'KernelStack: 6544 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 
352016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:11.948 10:10:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.948 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.948 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.948 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.948 10:10:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.948 10:10:25 -- setup/common.sh@32 -- # continue 00:05:11.948 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.948 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 
00:05:12.207 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.207 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.207 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 
10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.208 
10:10:25 -- setup/common.sh@33 -- # echo 1024 00:05:12.208 10:10:25 -- setup/common.sh@33 -- # return 0 00:05:12.208 10:10:25 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.208 10:10:25 -- setup/hugepages.sh@112 -- # get_nodes 00:05:12.208 10:10:25 -- setup/hugepages.sh@27 -- # local node 00:05:12.208 10:10:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:12.208 10:10:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:12.208 10:10:25 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:12.208 10:10:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:12.208 10:10:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:12.208 10:10:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:12.208 10:10:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:12.208 10:10:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.208 10:10:25 -- setup/common.sh@18 -- # local node=0 00:05:12.208 10:10:25 -- setup/common.sh@19 -- # local var val 00:05:12.208 10:10:25 -- setup/common.sh@20 -- # local mem_f mem 00:05:12.208 10:10:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.208 10:10:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:12.208 10:10:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:12.208 10:10:25 -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.208 10:10:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7012872 kB' 'MemUsed: 5229108 kB' 'SwapCached: 0 kB' 'Active: 450120 kB' 'Inactive: 2360488 kB' 'Active(anon): 129220 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360488 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 2691860 kB' 'Mapped: 48708 kB' 'AnonPages: 120416 kB' 'Shmem: 10468 kB' 'KernelStack: 6544 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79268 kB' 'Slab: 156776 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77508 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.208 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.208 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 
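The long runs of repeated "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "continue" pairs above are setup/common.sh's get_meminfo helper scanning every meminfo field until the requested key matches; the matching value (1024 here) is echoed back to hugepages.sh, which checks it against nr_hugepages + surp + resv before moving on to the per-node lookups. A minimal sketch of that lookup pattern, assuming the same IFS=': ' splitting seen in the trace (an assumed simplification, not the real setup/common.sh code):

    get_meminfo_sketch() {                                  # hypothetical simplified helper
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue                # skip non-matching keys, as in the trace
            echo "$val"                                     # e.g. HugePages_Total -> 1024
            return 0
        done < <(sed 's/^Node [0-9]* *//' "$mem_f")         # drop the "Node N" prefix of per-node files
        return 1
    }

With this sketch, get_meminfo_sketch HugePages_Total and get_meminfo_sketch HugePages_Surp 0 would reproduce the 1024 and 0 values read back in the trace above and below.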
00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.209 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.209 10:10:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.209 10:10:25 -- setup/common.sh@33 -- # echo 0 00:05:12.209 10:10:25 -- setup/common.sh@33 -- # return 0 00:05:12.209 10:10:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:12.209 node0=1024 expecting 1024 00:05:12.209 ************************************ 00:05:12.209 END TEST default_setup 00:05:12.209 ************************************ 00:05:12.209 10:10:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:12.209 10:10:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:12.209 10:10:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:12.209 10:10:25 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:12.209 10:10:25 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:12.209 00:05:12.209 real 0m1.057s 00:05:12.209 user 0m0.511s 00:05:12.209 sys 0m0.458s 00:05:12.209 10:10:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.209 10:10:25 -- common/autotest_common.sh@10 -- # set +x 00:05:12.209 10:10:25 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:12.209 10:10:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:12.209 10:10:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:12.209 10:10:25 -- common/autotest_common.sh@10 -- # set +x 00:05:12.209 ************************************ 00:05:12.209 START TEST 
per_node_1G_alloc 00:05:12.209 ************************************ 00:05:12.209 10:10:25 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:05:12.209 10:10:25 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:12.209 10:10:25 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:12.209 10:10:25 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:12.209 10:10:25 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:12.209 10:10:25 -- setup/hugepages.sh@51 -- # shift 00:05:12.209 10:10:25 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:12.210 10:10:25 -- setup/hugepages.sh@52 -- # local node_ids 00:05:12.210 10:10:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:12.210 10:10:25 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:12.210 10:10:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:12.210 10:10:25 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:12.210 10:10:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:12.210 10:10:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:12.210 10:10:25 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:12.210 10:10:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:12.210 10:10:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:12.210 10:10:25 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:12.210 10:10:25 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:12.210 10:10:25 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:12.210 10:10:25 -- setup/hugepages.sh@73 -- # return 0 00:05:12.210 10:10:25 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:12.210 10:10:25 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:12.210 10:10:25 -- setup/hugepages.sh@146 -- # setup output 00:05:12.210 10:10:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.210 10:10:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:12.468 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:12.468 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:12.468 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:12.468 10:10:25 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:12.468 10:10:25 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:12.468 10:10:25 -- setup/hugepages.sh@89 -- # local node 00:05:12.468 10:10:25 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:12.468 10:10:25 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:12.468 10:10:25 -- setup/hugepages.sh@92 -- # local surp 00:05:12.468 10:10:25 -- setup/hugepages.sh@93 -- # local resv 00:05:12.468 10:10:25 -- setup/hugepages.sh@94 -- # local anon 00:05:12.468 10:10:25 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:12.468 10:10:25 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:12.468 10:10:25 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:12.468 10:10:25 -- setup/common.sh@18 -- # local node= 00:05:12.468 10:10:25 -- setup/common.sh@19 -- # local var val 00:05:12.468 10:10:25 -- setup/common.sh@20 -- # local mem_f mem 00:05:12.468 10:10:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.468 10:10:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.468 10:10:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.468 10:10:25 -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.468 10:10:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:05:12.468 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.468 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.468 10:10:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8057644 kB' 'MemAvailable: 10541732 kB' 'Buffers: 2436 kB' 'Cached: 2689424 kB' 'SwapCached: 0 kB' 'Active: 450616 kB' 'Inactive: 2360492 kB' 'Active(anon): 129716 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360492 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120848 kB' 'Mapped: 48856 kB' 'Shmem: 10468 kB' 'KReclaimable: 79268 kB' 'Slab: 156756 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77488 kB' 'KernelStack: 6504 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:12.468 10:10:25 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.468 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.468 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.469 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.469 10:10:25 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.469 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.469 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.469 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.469 10:10:25 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.469 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.469 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.469 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.469 10:10:25 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.469 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.469 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.469 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.469 10:10:25 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.469 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.469 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.469 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.469 10:10:25 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.469 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.469 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.469 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.469 10:10:25 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.469 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.469 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.469 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.469 10:10:25 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.469 10:10:25 -- setup/common.sh@32 -- # continue 
00:05:12.469 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.469 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.469 10:10:25 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.469 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.469 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.469 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.469 10:10:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.469 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.469 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ AnonPages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 
00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.734 10:10:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.734 10:10:25 -- setup/common.sh@33 -- # echo 0 00:05:12.734 10:10:25 -- setup/common.sh@33 -- # return 0 00:05:12.734 10:10:25 -- setup/hugepages.sh@97 -- # anon=0 00:05:12.734 10:10:25 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:12.734 10:10:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.734 10:10:25 -- setup/common.sh@18 -- # local node= 00:05:12.734 10:10:25 -- setup/common.sh@19 -- # local var val 00:05:12.734 10:10:25 -- setup/common.sh@20 -- # local mem_f mem 00:05:12.734 10:10:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.734 10:10:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.734 10:10:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.734 10:10:25 -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.734 10:10:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.734 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8057644 kB' 'MemAvailable: 10541732 kB' 'Buffers: 2436 kB' 'Cached: 2689424 kB' 'SwapCached: 0 kB' 'Active: 450408 kB' 'Inactive: 2360492 kB' 'Active(anon): 129508 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360492 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120584 kB' 'Mapped: 48708 kB' 'Shmem: 
10468 kB' 'KReclaimable: 79268 kB' 'Slab: 156776 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77508 kB' 'KernelStack: 6560 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.735 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.735 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # continue 
00:05:12.736 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.736 10:10:25 -- setup/common.sh@33 -- # echo 0 00:05:12.736 10:10:25 -- setup/common.sh@33 -- # return 0 00:05:12.736 10:10:25 -- setup/hugepages.sh@99 -- # surp=0 00:05:12.736 10:10:25 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:12.736 10:10:25 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:12.736 10:10:25 -- setup/common.sh@18 -- # local node= 00:05:12.736 10:10:25 -- setup/common.sh@19 -- # local var val 00:05:12.736 10:10:25 -- setup/common.sh@20 -- # local mem_f mem 00:05:12.736 10:10:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.736 10:10:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.736 10:10:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.736 10:10:25 -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.736 10:10:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8057848 kB' 'MemAvailable: 10541936 kB' 'Buffers: 2436 kB' 'Cached: 2689424 kB' 'SwapCached: 0 kB' 'Active: 450196 kB' 'Inactive: 2360492 kB' 'Active(anon): 129296 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360492 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120444 kB' 'Mapped: 48708 kB' 'Shmem: 10468 kB' 'KReclaimable: 79268 kB' 'Slab: 156780 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77512 kB' 'KernelStack: 6544 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.736 10:10:25 -- setup/common.sh@32 -- 
# continue 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.736 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.736 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.736 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.736 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:26 -- setup/common.sh@32 -- 
# [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.736 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.736 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.736 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.736 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.736 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.736 10:10:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.736 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.736 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.736 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # 
continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.737 10:10:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.737 10:10:26 -- setup/common.sh@33 -- # echo 0 00:05:12.737 10:10:26 -- setup/common.sh@33 -- # return 0 00:05:12.737 nr_hugepages=512 00:05:12.737 resv_hugepages=0 00:05:12.737 
surplus_hugepages=0 00:05:12.737 anon_hugepages=0 00:05:12.737 10:10:26 -- setup/hugepages.sh@100 -- # resv=0 00:05:12.737 10:10:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:12.737 10:10:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:12.737 10:10:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:12.737 10:10:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:12.737 10:10:26 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:12.737 10:10:26 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:12.737 10:10:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:12.737 10:10:26 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:12.737 10:10:26 -- setup/common.sh@18 -- # local node= 00:05:12.737 10:10:26 -- setup/common.sh@19 -- # local var val 00:05:12.737 10:10:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:12.737 10:10:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.737 10:10:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.737 10:10:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.737 10:10:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.737 10:10:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.737 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8058740 kB' 'MemAvailable: 10542828 kB' 'Buffers: 2436 kB' 'Cached: 2689424 kB' 'SwapCached: 0 kB' 'Active: 450196 kB' 'Inactive: 2360492 kB' 'Active(anon): 129296 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360492 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120452 kB' 'Mapped: 48708 kB' 'Shmem: 10468 kB' 'KReclaimable: 79268 kB' 'Slab: 156776 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77508 kB' 'KernelStack: 6544 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 
-- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 
-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.738 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.738 10:10:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 
00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.739 10:10:26 -- setup/common.sh@33 -- # echo 512 00:05:12.739 10:10:26 -- setup/common.sh@33 -- # return 0 00:05:12.739 10:10:26 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:12.739 10:10:26 -- setup/hugepages.sh@112 -- # get_nodes 00:05:12.739 10:10:26 -- setup/hugepages.sh@27 -- # local node 00:05:12.739 10:10:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:12.739 10:10:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:12.739 10:10:26 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:12.739 10:10:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:12.739 10:10:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:12.739 10:10:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:12.739 10:10:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:12.739 10:10:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.739 10:10:26 -- setup/common.sh@18 -- # local node=0 00:05:12.739 10:10:26 -- setup/common.sh@19 -- # local var val 00:05:12.739 10:10:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:12.739 10:10:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.739 10:10:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 
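
The trace above and below is setup/common.sh's get_meminfo walking a meminfo file one "Key: value" pair at a time (IFS=': '; read -r var val _) until it reaches the requested key, then echoing that value; the check just before this point decides whether to read the per-node file /sys/devices/system/node/node0/meminfo instead of /proc/meminfo. What follows is a minimal stand-alone sketch of that lookup pattern for readers of the log; the function name meminfo_lookup and its exact behaviour are illustrative simplifications, not the real setup/common.sh helper.

#!/usr/bin/env bash
# Minimal sketch (not the actual setup/common.sh): look up one key from
# /proc/meminfo, or from a node's meminfo file when a node number is given,
# in the same key-by-key way the get_meminfo trace walks the file.
meminfo_lookup() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo
    # Per-node statistics live under /sys and prefix every row with "Node N ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val
    while IFS= read -r line; do
        line=${line#Node "$node" }              # drop the "Node 0 " prefix if present
        IFS=': ' read -r var val _ <<< "$line"  # split "Key:   value kB" into key and value
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1
}

meminfo_lookup HugePages_Total       # system-wide count
meminfo_lookup HugePages_Surp 0      # node 0 only
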
00:05:12.739 10:10:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:12.739 10:10:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.739 10:10:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8058740 kB' 'MemUsed: 4183240 kB' 'SwapCached: 0 kB' 'Active: 450120 kB' 'Inactive: 2360492 kB' 'Active(anon): 129220 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360492 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 2691860 kB' 'Mapped: 48708 kB' 'AnonPages: 120320 kB' 'Shmem: 10468 kB' 'KernelStack: 6528 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79268 kB' 'Slab: 156768 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.739 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.739 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 
-- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- 
# continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # continue 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.740 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.740 10:10:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.740 10:10:26 -- setup/common.sh@33 -- # echo 0 00:05:12.740 10:10:26 -- setup/common.sh@33 -- # return 0 00:05:12.740 node0=512 expecting 512 00:05:12.740 ************************************ 00:05:12.740 END TEST per_node_1G_alloc 00:05:12.740 ************************************ 00:05:12.740 10:10:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:12.740 10:10:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:12.740 10:10:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:12.740 10:10:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:12.740 10:10:26 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:12.740 10:10:26 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:12.740 00:05:12.740 real 0m0.569s 00:05:12.740 user 0m0.273s 00:05:12.740 sys 0m0.304s 00:05:12.740 10:10:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.740 10:10:26 -- common/autotest_common.sh@10 -- # set +x 00:05:12.740 10:10:26 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:12.740 10:10:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:12.740 10:10:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:12.740 10:10:26 -- common/autotest_common.sh@10 -- # set +x 00:05:12.740 ************************************ 00:05:12.740 START TEST even_2G_alloc 00:05:12.740 ************************************ 00:05:12.740 10:10:26 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:05:12.740 10:10:26 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:12.740 10:10:26 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:12.740 10:10:26 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:12.740 10:10:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:12.740 10:10:26 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:12.740 10:10:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:12.740 10:10:26 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:12.740 10:10:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:12.740 10:10:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:12.740 10:10:26 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:12.740 10:10:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:12.740 10:10:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:12.740 10:10:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:12.740 10:10:26 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:12.740 10:10:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:12.740 10:10:26 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:12.740 10:10:26 -- setup/hugepages.sh@83 -- # : 
0 00:05:12.740 10:10:26 -- setup/hugepages.sh@84 -- # : 0 00:05:12.740 10:10:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:12.740 10:10:26 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:12.740 10:10:26 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:12.740 10:10:26 -- setup/hugepages.sh@153 -- # setup output 00:05:12.740 10:10:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.740 10:10:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:13.311 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.311 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:13.311 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:13.311 10:10:26 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:13.311 10:10:26 -- setup/hugepages.sh@89 -- # local node 00:05:13.311 10:10:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:13.311 10:10:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:13.311 10:10:26 -- setup/hugepages.sh@92 -- # local surp 00:05:13.311 10:10:26 -- setup/hugepages.sh@93 -- # local resv 00:05:13.311 10:10:26 -- setup/hugepages.sh@94 -- # local anon 00:05:13.311 10:10:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:13.311 10:10:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:13.311 10:10:26 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:13.311 10:10:26 -- setup/common.sh@18 -- # local node= 00:05:13.311 10:10:26 -- setup/common.sh@19 -- # local var val 00:05:13.311 10:10:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.311 10:10:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.311 10:10:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.311 10:10:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.311 10:10:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.311 10:10:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.311 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.311 10:10:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7012916 kB' 'MemAvailable: 9497004 kB' 'Buffers: 2436 kB' 'Cached: 2689424 kB' 'SwapCached: 0 kB' 'Active: 450232 kB' 'Inactive: 2360492 kB' 'Active(anon): 129332 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360492 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120436 kB' 'Mapped: 48836 kB' 'Shmem: 10468 kB' 'KReclaimable: 79268 kB' 'Slab: 156740 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77472 kB' 'KernelStack: 6520 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:13.311 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.311 10:10:26 -- setup/common.sh@32 -- # [[ 
MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.311 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.311 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.311 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.311 10:10:26 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.311 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.311 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.311 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.311 10:10:26 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.311 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.311 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.311 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.311 10:10:26 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.311 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.311 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.311 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.311 10:10:26 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.311 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.311 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.311 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.311 10:10:26 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.311 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.311 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.311 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.311 10:10:26 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.311 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.311 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.311 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.311 10:10:26 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.311 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.311 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.311 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.311 10:10:26 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.311 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.311 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.311 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.311 10:10:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.311 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.311 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.311 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 
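
For the even_2G_alloc run that started above, the trace shows get_test_nr_hugepages turning the requested size 2097152 into nr_hugepages=1024 (the meminfo dumps report Hugepagesize: 2048 kB, and 2097152 / 2048 = 1024), assigning that count to the single memory node, and re-running setup.sh with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes before verify_nr_hugepages reads the counts back. A rough sketch of that bookkeeping follows; the variable and array names are illustrative, not the actual setup/hugepages.sh helpers.

#!/usr/bin/env bash
# Minimal sketch of the bookkeeping the even_2G_alloc trace performs:
# turn a requested size into a hugepage count and spread it over the nodes.
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # e.g. 2048
size=2097152                                  # requested size, as in the trace
nr_hugepages=$(( size / hugepagesize_kb ))    # 2097152 / 2048 = 1024 pages

# Count memory nodes and give each an even share, as HUGE_EVEN_ALLOC=yes asks for.
nodes=(/sys/devices/system/node/node[0-9]*)
per_node=$(( nr_hugepages / ${#nodes[@]} ))
declare -A nodes_test
for n in "${nodes[@]}"; do
    nodes_test[${n##*node}]=$per_node
done

# The verification step then reads HugePages_Total back for each node and
# expects the same figure, which is what lines like "node0=512 expecting 512"
# earlier in this log report.
for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[$node]} expecting ${nodes_test[$node]}"
done
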
00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- 
setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.312 10:10:26 -- setup/common.sh@33 -- # echo 0 00:05:13.312 10:10:26 -- setup/common.sh@33 -- # return 0 00:05:13.312 10:10:26 -- setup/hugepages.sh@97 -- # anon=0 00:05:13.312 10:10:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:13.312 10:10:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.312 10:10:26 -- setup/common.sh@18 -- # local node= 00:05:13.312 10:10:26 -- setup/common.sh@19 -- # local var val 00:05:13.312 10:10:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.312 10:10:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.312 10:10:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.312 10:10:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.312 10:10:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.312 10:10:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7013356 kB' 'MemAvailable: 9497444 kB' 'Buffers: 2436 kB' 'Cached: 2689424 kB' 'SwapCached: 0 kB' 'Active: 450216 kB' 'Inactive: 2360492 kB' 'Active(anon): 129316 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360492 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120472 kB' 'Mapped: 48708 kB' 'Shmem: 10468 kB' 'KReclaimable: 79268 kB' 'Slab: 156736 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77468 kB' 'KernelStack: 6544 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.312 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.312 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- 
setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 
10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 
10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.313 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.313 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.314 10:10:26 -- setup/common.sh@33 -- # echo 0 00:05:13.314 10:10:26 -- setup/common.sh@33 -- # return 0 00:05:13.314 10:10:26 -- setup/hugepages.sh@99 -- # surp=0 00:05:13.314 10:10:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:13.314 10:10:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:13.314 10:10:26 -- setup/common.sh@18 -- # local node= 00:05:13.314 10:10:26 -- setup/common.sh@19 -- # local var val 
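The xtrace above is the setup/common.sh get_meminfo helper walking every key of the meminfo snapshot until it reaches the one it was asked for (HugePages_Surp here, AnonHugePages just before it). Below is a minimal bash sketch of that flow, reconstructed from the trace itself; the variable names come from the trace, the loop is condensed, and it is not the verbatim SPDK source.

shopt -s extglob   # the "Node +([0-9]) " strip below uses an extended glob
get_meminfo() {
    local get=$1 node=$2
    local var val _ mem_f mem line
    mem_f=/proc/meminfo
    # With a node argument, prefer the per-node counters when they exist.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node <N> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # keep scanning until the requested key matches
        echo "$val"                        # e.g. HugePages_Surp resolves to 0 in this run
        return 0
    done
    return 1
}

That scan is why the log repeats the same [[ ... ]] / continue pair once per meminfo field: every key is tested in order before the requested one is found and echoed.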
00:05:13.314 10:10:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.314 10:10:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.314 10:10:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.314 10:10:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.314 10:10:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.314 10:10:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7014020 kB' 'MemAvailable: 9498108 kB' 'Buffers: 2436 kB' 'Cached: 2689424 kB' 'SwapCached: 0 kB' 'Active: 450248 kB' 'Inactive: 2360492 kB' 'Active(anon): 129348 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360492 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120456 kB' 'Mapped: 48708 kB' 'Shmem: 10468 kB' 'KReclaimable: 79268 kB' 'Slab: 156732 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77464 kB' 'KernelStack: 6544 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 
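The hugepage figures in the snapshot printed above are internally consistent; a quick cross-check using only values that appear in that dump:

hugepages_total=1024    # HugePages_Total: 1024
hugepagesize_kb=2048    # Hugepagesize: 2048 kB
hugetlb_kb=2097152      # Hugetlb: 2097152 kB
(( hugepages_total * hugepagesize_kb == hugetlb_kb )) && echo 'hugepage pool adds up'

So the 1024 reserved 2 MiB pages account for exactly 2 GiB of the 12241980 kB MemTotal reported for this VM.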
00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.314 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 
00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.315 10:10:26 -- setup/common.sh@33 -- # echo 0 00:05:13.315 10:10:26 -- setup/common.sh@33 -- # return 0 00:05:13.315 nr_hugepages=1024 00:05:13.315 resv_hugepages=0 00:05:13.315 surplus_hugepages=0 00:05:13.315 anon_hugepages=0 00:05:13.315 10:10:26 -- setup/hugepages.sh@100 -- # resv=0 00:05:13.315 10:10:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:13.315 10:10:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:13.315 10:10:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:13.315 10:10:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:13.315 10:10:26 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:13.315 10:10:26 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:13.315 10:10:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:13.315 10:10:26 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:13.315 10:10:26 -- setup/common.sh@18 -- # local node= 00:05:13.315 10:10:26 -- setup/common.sh@19 -- # local var val 00:05:13.315 10:10:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.315 10:10:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.315 10:10:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.315 10:10:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.315 10:10:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.315 10:10:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
12241980 kB' 'MemFree: 7014020 kB' 'MemAvailable: 9498108 kB' 'Buffers: 2436 kB' 'Cached: 2689424 kB' 'SwapCached: 0 kB' 'Active: 450452 kB' 'Inactive: 2360492 kB' 'Active(anon): 129552 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360492 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120672 kB' 'Mapped: 48708 kB' 'Shmem: 10468 kB' 'KReclaimable: 79268 kB' 'Slab: 156736 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77468 kB' 'KernelStack: 6560 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.315 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ Active(anon) 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 
10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.316 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.316 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 
10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.317 10:10:26 -- setup/common.sh@33 -- # echo 1024 00:05:13.317 10:10:26 -- setup/common.sh@33 -- # return 0 00:05:13.317 10:10:26 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:13.317 10:10:26 -- setup/hugepages.sh@112 -- # get_nodes 00:05:13.317 10:10:26 -- setup/hugepages.sh@27 -- # local node 00:05:13.317 10:10:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:13.317 10:10:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:13.317 10:10:26 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:13.317 10:10:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:13.317 10:10:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:13.317 10:10:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:13.317 10:10:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:13.317 10:10:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.317 10:10:26 -- setup/common.sh@18 -- # local node=0 00:05:13.317 10:10:26 -- setup/common.sh@19 -- # local var val 00:05:13.317 10:10:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.317 10:10:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.317 10:10:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:13.317 10:10:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:13.317 10:10:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.317 10:10:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7014020 kB' 'MemUsed: 5227960 kB' 'SwapCached: 0 kB' 'Active: 450092 kB' 'Inactive: 2360492 kB' 'Active(anon): 129192 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360492 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 2691860 kB' 'Mapped: 48708 kB' 'AnonPages: 120352 kB' 'Shmem: 10468 kB' 'KernelStack: 6528 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79268 kB' 'Slab: 156728 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77460 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 
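Stripped of the field-by-field scanning, the verification that runs through setup/hugepages.sh@97 to @130 in this trace is a small balance check: the kernel's HugePages_Total must equal the requested page count plus surplus and reserved pages, and each NUMA node must hold its expected share. A hedged sketch of that arithmetic follows, using the values from this run (1024 pages, 0 surplus, 0 reserved, a single node) and the get_meminfo helper sketched earlier; the @-tags in the comments are the ones visible in the trace, while the control flow here is condensed.

nr_hugepages=1024                        # requested by the test
anon=$(get_meminfo AnonHugePages)        # 0 in this run (hugepages.sh@97)
surp=$(get_meminfo HugePages_Surp)       # 0 (hugepages.sh@99)
resv=$(get_meminfo HugePages_Rsvd)       # 0 (hugepages.sh@100)
total=$(get_meminfo HugePages_Total)     # 1024 (hugepages.sh@110)

(( total == nr_hugepages + surp + resv )) || exit 1   # hugepages.sh@107
(( total == nr_hugepages )) || exit 1                 # hugepages.sh@109

# Per-node accounting: one node here, expected to hold all 1024 pages.
nodes_test=([0]=1024)   # expected split across nodes
nodes_sys=([0]=1024)    # what the kernel reports per node
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                                   # hugepages.sh@116
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # hugepages.sh@117
    echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
    [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1      # hugepages.sh@130
done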
10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.317 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.317 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # continue 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.318 10:10:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.318 10:10:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.318 10:10:26 -- setup/common.sh@33 -- # echo 0 00:05:13.318 10:10:26 -- setup/common.sh@33 -- # return 0 00:05:13.318 10:10:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:13.318 10:10:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:13.318 10:10:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:13.318 node0=1024 expecting 1024 00:05:13.318 ************************************ 00:05:13.318 END TEST even_2G_alloc 00:05:13.318 ************************************ 00:05:13.318 10:10:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:13.318 10:10:26 -- setup/hugepages.sh@128 -- # echo 
'node0=1024 expecting 1024' 00:05:13.318 10:10:26 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:13.318 00:05:13.318 real 0m0.595s 00:05:13.318 user 0m0.285s 00:05:13.318 sys 0m0.311s 00:05:13.318 10:10:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.318 10:10:26 -- common/autotest_common.sh@10 -- # set +x 00:05:13.588 10:10:26 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:13.588 10:10:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:13.588 10:10:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.588 10:10:26 -- common/autotest_common.sh@10 -- # set +x 00:05:13.588 ************************************ 00:05:13.588 START TEST odd_alloc 00:05:13.588 ************************************ 00:05:13.588 10:10:26 -- common/autotest_common.sh@1104 -- # odd_alloc 00:05:13.588 10:10:26 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:13.588 10:10:26 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:13.588 10:10:26 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:13.588 10:10:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:13.588 10:10:26 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:13.588 10:10:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:13.588 10:10:26 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:13.588 10:10:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:13.588 10:10:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:13.588 10:10:26 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:13.588 10:10:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:13.588 10:10:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:13.588 10:10:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:13.588 10:10:26 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:13.588 10:10:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:13.588 10:10:26 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:13.588 10:10:26 -- setup/hugepages.sh@83 -- # : 0 00:05:13.588 10:10:26 -- setup/hugepages.sh@84 -- # : 0 00:05:13.588 10:10:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:13.588 10:10:26 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:13.588 10:10:26 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:13.588 10:10:26 -- setup/hugepages.sh@160 -- # setup output 00:05:13.588 10:10:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.588 10:10:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:13.862 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.862 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:13.862 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:13.862 10:10:27 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:13.862 10:10:27 -- setup/hugepages.sh@89 -- # local node 00:05:13.862 10:10:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:13.862 10:10:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:13.862 10:10:27 -- setup/hugepages.sh@92 -- # local surp 00:05:13.862 10:10:27 -- setup/hugepages.sh@93 -- # local resv 00:05:13.862 10:10:27 -- setup/hugepages.sh@94 -- # local anon 00:05:13.862 10:10:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:13.862 10:10:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:13.862 10:10:27 -- setup/common.sh@17 -- # local get=AnonHugePages 
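The odd_alloc test being set up above sizes its pool from HUGEMEM=2049 MB. What the trace confirms is that a 2098176 kB request over 2048 kB pages comes out as nr_hugepages=1025, a deliberately odd page count so the allocation cannot split evenly. The rounding below is an assumption (the trace does not show the exact formula inside get_test_nr_hugepages), but it reproduces the numbers seen here.

HUGEMEM=2049                       # MB, set by the test (hugepages.sh@160)
size=$(( HUGEMEM * 1024 ))         # 2098176 kB, matching "get_test_nr_hugepages 2098176"
default_hugepages=2048             # kB per page, from "Hugepagesize: 2048 kB"
# Assumed ceiling division; the trace only confirms the inputs and the 1025 result.
nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))
echo "$nr_hugepages"               # 1025, forcing an uneven split when spread across nodes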
00:05:13.862 10:10:27 -- setup/common.sh@18 -- # local node= 00:05:13.862 10:10:27 -- setup/common.sh@19 -- # local var val 00:05:13.862 10:10:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.862 10:10:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.862 10:10:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.862 10:10:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.862 10:10:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.862 10:10:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.862 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7019796 kB' 'MemAvailable: 9503884 kB' 'Buffers: 2436 kB' 'Cached: 2689424 kB' 'SwapCached: 0 kB' 'Active: 450476 kB' 'Inactive: 2360492 kB' 'Active(anon): 129576 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360492 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120648 kB' 'Mapped: 48836 kB' 'Shmem: 10468 kB' 'KReclaimable: 79268 kB' 'Slab: 156736 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77468 kB' 'KernelStack: 6604 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 
10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- 
setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.863 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.863 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.864 10:10:27 -- setup/common.sh@33 -- # echo 0 00:05:13.864 10:10:27 -- setup/common.sh@33 -- # return 0 00:05:13.864 10:10:27 -- setup/hugepages.sh@97 -- # anon=0 00:05:13.864 10:10:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:13.864 10:10:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.864 10:10:27 -- setup/common.sh@18 -- # local node= 00:05:13.864 10:10:27 -- setup/common.sh@19 -- # local var val 00:05:13.864 10:10:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.864 10:10:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.864 10:10:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.864 10:10:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.864 10:10:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.864 10:10:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 
10:10:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7019796 kB' 'MemAvailable: 9503888 kB' 'Buffers: 2436 kB' 'Cached: 2689428 kB' 'SwapCached: 0 kB' 'Active: 450332 kB' 'Inactive: 2360496 kB' 'Active(anon): 129432 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360496 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120588 kB' 'Mapped: 48708 kB' 'Shmem: 10468 kB' 'KReclaimable: 79268 kB' 'Slab: 156748 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77480 kB' 'KernelStack: 6588 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 
10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- 
setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.864 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.864 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.865 10:10:27 -- setup/common.sh@33 -- # echo 0 00:05:13.865 10:10:27 -- setup/common.sh@33 -- # return 0 00:05:13.865 10:10:27 -- setup/hugepages.sh@99 -- # surp=0 00:05:13.865 10:10:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:13.865 10:10:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:13.865 10:10:27 -- setup/common.sh@18 -- # local node= 00:05:13.865 10:10:27 -- setup/common.sh@19 -- # local var val 00:05:13.865 10:10:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.865 10:10:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.865 10:10:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.865 10:10:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.865 10:10:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.865 10:10:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7019796 kB' 'MemAvailable: 9503888 kB' 'Buffers: 2436 kB' 'Cached: 2689428 kB' 'SwapCached: 0 kB' 'Active: 450340 kB' 'Inactive: 2360496 kB' 'Active(anon): 129440 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360496 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120624 kB' 'Mapped: 48708 kB' 'Shmem: 10468 kB' 'KReclaimable: 79268 kB' 'Slab: 156744 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77476 kB' 'KernelStack: 6604 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.865 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.865 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 
10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.866 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.866 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.867 10:10:27 -- setup/common.sh@33 -- # echo 0 00:05:13.867 10:10:27 -- setup/common.sh@33 -- # return 0 00:05:13.867 10:10:27 -- setup/hugepages.sh@100 -- # resv=0 00:05:13.867 10:10:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:13.867 nr_hugepages=1025 00:05:13.867 resv_hugepages=0 00:05:13.867 surplus_hugepages=0 00:05:13.867 anon_hugepages=0 00:05:13.867 10:10:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:13.867 10:10:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:13.867 10:10:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:13.867 10:10:27 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:13.867 10:10:27 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:13.867 10:10:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:13.867 10:10:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:13.867 10:10:27 -- setup/common.sh@18 -- # local node= 00:05:13.867 10:10:27 -- setup/common.sh@19 -- # local var val 00:05:13.867 10:10:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.867 10:10:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.867 10:10:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.867 10:10:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.867 10:10:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.867 10:10:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.867 10:10:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7019796 kB' 'MemAvailable: 9503888 kB' 'Buffers: 2436 kB' 'Cached: 2689428 kB' 'SwapCached: 0 kB' 'Active: 450396 kB' 'Inactive: 2360496 kB' 'Active(anon): 129496 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360496 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120608 kB' 'Mapped: 48708 kB' 'Shmem: 10468 kB' 'KReclaimable: 79268 kB' 'Slab: 156744 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77476 kB' 'KernelStack: 6604 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 
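The three get_meminfo passes traced above (AnonHugePages, HugePages_Surp, HugePages_Rsvd) all walk the same mapfile snapshot of /proc/meminfo with IFS=': ', continuing past every non-matching key and echoing the value of the requested one (0 in each case), after which verify_nr_hugepages checks that 1025 == nr_hugepages + surp + resv and re-reads HugePages_Total. A condensed sketch of that lookup pattern, not the literal setup/common.sh implementation:

  # Hedged sketch: print one key's value from /proc/meminfo, or from a node's
  # meminfo file when a node number is passed; default to 0 if the key is absent.
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local var val _
      # Per-node files prefix every key with "Node N ", so strip that first,
      # then scan for the requested key, skipping everything else.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      echo 0
  }
  # In this run: get_meminfo HugePages_Total -> 1025; get_meminfo HugePages_Surp 0 -> 0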
00:05:13.867 10:10:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # continue 
00:05:13.867 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.867 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.867 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.868 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.868 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.868 10:10:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.868 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.868 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.868 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.868 10:10:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.868 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.868 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.868 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.868 10:10:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.868 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.868 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.868 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.868 10:10:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.868 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.868 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.868 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.868 10:10:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.868 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.868 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.868 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.868 10:10:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.868 10:10:27 -- setup/common.sh@32 -- # continue 00:05:13.868 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.868 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.868 10:10:27 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.868 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.127 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.127 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.127 10:10:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.127 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.127 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.127 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.127 10:10:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.127 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.127 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.127 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.127 10:10:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.127 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.127 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.127 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.127 10:10:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.127 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.127 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.127 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.127 10:10:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.127 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.127 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.127 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.127 10:10:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.127 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.127 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.127 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.127 10:10:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.127 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.127 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.127 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.127 10:10:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.127 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.127 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.127 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.127 10:10:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.127 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 
-- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.128 10:10:27 -- setup/common.sh@33 -- # echo 1025 00:05:14.128 10:10:27 -- setup/common.sh@33 -- # return 0 00:05:14.128 10:10:27 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:14.128 10:10:27 -- setup/hugepages.sh@112 -- # get_nodes 00:05:14.128 10:10:27 -- setup/hugepages.sh@27 -- # local node 00:05:14.128 10:10:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:14.128 10:10:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:14.128 10:10:27 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:14.128 10:10:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:14.128 10:10:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:14.128 10:10:27 -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:05:14.128 10:10:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:14.128 10:10:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.128 10:10:27 -- setup/common.sh@18 -- # local node=0 00:05:14.128 10:10:27 -- setup/common.sh@19 -- # local var val 00:05:14.128 10:10:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.128 10:10:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.128 10:10:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:14.128 10:10:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:14.128 10:10:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.128 10:10:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7019796 kB' 'MemUsed: 5222184 kB' 'SwapCached: 0 kB' 'Active: 450288 kB' 'Inactive: 2360496 kB' 'Active(anon): 129388 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360496 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 2691864 kB' 'Mapped: 48708 kB' 'AnonPages: 120476 kB' 'Shmem: 10468 kB' 'KernelStack: 6588 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79268 kB' 'Slab: 156744 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77476 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
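The get_meminfo call traced above reduces to a small parser: use /proc/meminfo, or the node-local /sys/devices/system/node/node<N>/meminfo when a node number is given, strip the leading "Node <N> " prefix from each line, then scan "key: value" pairs with IFS=': ' until the requested key matches. A minimal bash sketch of that idea (illustrative only, not the verbatim setup/common.sh helper):

    #!/usr/bin/env bash
    # Minimal sketch of the meminfo lookup traced above; illustrative only.
    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val
        # Per-node lookups read the node-local file instead, as the trace shows.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }             # drop the "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"  # split "Key:   value kB"
            if [[ $var == "$get" ]]; then           # e.g. HugePages_Surp
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    # Example: get_meminfo_sketch HugePages_Surp 0   prints 0 on this run.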
00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.128 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.128 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 
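A note on the backslashes that fill this trace: the comparison being run is [[ $var == "$get" ]], and because the right-hand expansion is quoted, bash compares it as a literal string rather than a glob; xtrace then prints that literal with every character escaped, which is why HugePages_Surp shows up as \H\u\g\e\P\a\g\e\s\_\S\u\r\p. A short hypothetical reproduction:

    # Reproduces the escaped output seen above: a quoted RHS in [[ ]] is compared
    # literally, and xtrace prints it character-escaped when echoing the test.
    get=HugePages_Surp
    set -x
    [[ HugePages_Total == "$get" ]] || :   # traced roughly as: [[ HugePages_Total == \H\u\g\e... ]]
    set +x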
00:05:14.129 10:10:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # continue 
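The surrounding bookkeeping (get_nodes at hugepages.sh@112 above, then the nodes_test loop) follows a simple pattern: enumerate /sys/devices/system/node/node<N>, record each node's hugepage count, and compare it with the expected per-node split, which is what produces the "node0=1025 expecting 1025" line further down. A rough standalone sketch with illustrative names:

    # Rough sketch of the per-node enumeration traced above; names are illustrative.
    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}
        # Node-local lines look like "Node 0 HugePages_Total:  1025"; keep the count.
        nodes_sys[$id]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
    done
    for id in "${!nodes_sys[@]}"; do
        echo "node$id=${nodes_sys[$id]}"     # odd_alloc expects node0=1025 here
    done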
00:05:14.129 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.129 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.129 10:10:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.129 10:10:27 -- setup/common.sh@33 -- # echo 0 00:05:14.129 10:10:27 -- setup/common.sh@33 -- # return 0 00:05:14.129 10:10:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:14.129 10:10:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:14.129 10:10:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:14.129 10:10:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:14.129 10:10:27 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:14.129 node0=1025 expecting 1025 00:05:14.129 10:10:27 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:14.129 00:05:14.129 real 0m0.591s 00:05:14.129 user 0m0.296s 00:05:14.129 sys 0m0.288s 00:05:14.129 10:10:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.129 10:10:27 -- common/autotest_common.sh@10 -- # set +x 00:05:14.129 ************************************ 00:05:14.129 END TEST odd_alloc 00:05:14.129 ************************************ 00:05:14.129 10:10:27 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:14.129 10:10:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:14.129 10:10:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.129 10:10:27 -- common/autotest_common.sh@10 -- # set +x 00:05:14.129 ************************************ 00:05:14.129 START TEST custom_alloc 00:05:14.129 ************************************ 00:05:14.129 10:10:27 -- common/autotest_common.sh@1104 -- # custom_alloc 00:05:14.129 10:10:27 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:14.129 10:10:27 -- setup/hugepages.sh@169 -- # local node 00:05:14.129 10:10:27 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:14.129 10:10:27 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:14.129 10:10:27 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:14.129 10:10:27 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:14.129 10:10:27 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:14.129 10:10:27 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:14.129 10:10:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 
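custom_alloc starts by asking get_test_nr_hugepages for 1048576 kB of hugepage memory; with the 2048 kB default hugepage size reported in the meminfo dumps further down, that request works out to the 512 pages assigned on the next trace line (and matches the "Hugetlb: 1048576 kB" total shown later). The arithmetic, as a quick standalone check:

    # Quick check of the size -> page-count arithmetic (illustrative only).
    size_kb=1048576          # requested via get_test_nr_hugepages above
    hugepagesize_kb=2048     # "Hugepagesize: 2048 kB" in the meminfo dumps below
    echo $(( size_kb / hugepagesize_kb ))   # prints 512, matching nr_hugepages=512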
00:05:14.129 10:10:27 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:14.129 10:10:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:14.129 10:10:27 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:14.129 10:10:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:14.129 10:10:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:14.129 10:10:27 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:14.129 10:10:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:14.129 10:10:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:14.129 10:10:27 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:14.129 10:10:27 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:14.129 10:10:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:14.129 10:10:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:14.129 10:10:27 -- setup/hugepages.sh@83 -- # : 0 00:05:14.129 10:10:27 -- setup/hugepages.sh@84 -- # : 0 00:05:14.129 10:10:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:14.129 10:10:27 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:14.129 10:10:27 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:14.129 10:10:27 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:14.129 10:10:27 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:14.129 10:10:27 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:14.129 10:10:27 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:14.129 10:10:27 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:14.129 10:10:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:14.129 10:10:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:14.129 10:10:27 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:14.129 10:10:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:14.129 10:10:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:14.129 10:10:27 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:14.129 10:10:27 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:14.129 10:10:27 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:14.129 10:10:27 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:14.129 10:10:27 -- setup/hugepages.sh@78 -- # return 0 00:05:14.129 10:10:27 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:14.129 10:10:27 -- setup/hugepages.sh@187 -- # setup output 00:05:14.129 10:10:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.129 10:10:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:14.390 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:14.390 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:14.390 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:14.390 10:10:27 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:14.390 10:10:27 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:14.390 10:10:27 -- setup/hugepages.sh@89 -- # local node 00:05:14.390 10:10:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:14.390 10:10:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:14.390 10:10:27 -- setup/hugepages.sh@92 -- # local surp 00:05:14.390 10:10:27 -- setup/hugepages.sh@93 -- # local resv 00:05:14.390 10:10:27 -- setup/hugepages.sh@94 -- # local anon 00:05:14.390 10:10:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:14.390 10:10:27 -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:14.390 10:10:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:14.390 10:10:27 -- setup/common.sh@18 -- # local node= 00:05:14.390 10:10:27 -- setup/common.sh@19 -- # local var val 00:05:14.390 10:10:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.390 10:10:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.390 10:10:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.390 10:10:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.390 10:10:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.390 10:10:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.390 10:10:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8066500 kB' 'MemAvailable: 10550592 kB' 'Buffers: 2436 kB' 'Cached: 2689428 kB' 'SwapCached: 0 kB' 'Active: 450440 kB' 'Inactive: 2360496 kB' 'Active(anon): 129540 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360496 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120812 kB' 'Mapped: 48832 kB' 'Shmem: 10468 kB' 'KReclaimable: 79268 kB' 'Slab: 156740 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77472 kB' 'KernelStack: 6584 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.390 10:10:27 -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.390 10:10:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.390 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.390 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 
-- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.391 10:10:27 -- setup/common.sh@33 -- # echo 0 00:05:14.391 10:10:27 -- setup/common.sh@33 -- # return 0 00:05:14.391 10:10:27 -- setup/hugepages.sh@97 -- # anon=0 00:05:14.391 10:10:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:14.391 10:10:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.391 10:10:27 -- setup/common.sh@18 -- # local node= 00:05:14.391 10:10:27 -- setup/common.sh@19 -- # local var val 00:05:14.391 10:10:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.391 10:10:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.391 10:10:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.391 10:10:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.391 10:10:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.391 10:10:27 -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8066500 kB' 'MemAvailable: 10550592 kB' 'Buffers: 2436 kB' 'Cached: 2689428 kB' 'SwapCached: 0 kB' 'Active: 450188 kB' 'Inactive: 2360496 kB' 'Active(anon): 129288 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360496 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120544 kB' 'Mapped: 48708 kB' 'Shmem: 10468 kB' 'KReclaimable: 79268 kB' 'Slab: 156736 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77468 kB' 'KernelStack: 6576 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
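verify_nr_hugepages is repeating for the 512-page custom_alloc case the accounting it did for odd_alloc above: read AnonHugePages (anon=0 in the trace), then HugePages_Surp and HugePages_Rsvd, and require HugePages_Total to equal the requested count plus surplus and reserved pages. A compact standalone sketch of that check (helper name and layout are illustrative):

    # Sketch of the hugepage accounting performed by verify_nr_hugepages above.
    meminfo() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }
    nr_hugepages=512                      # requested by custom_alloc
    total=$(meminfo HugePages_Total)
    surp=$(meminfo HugePages_Surp)
    resv=$(meminfo HugePages_Rsvd)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage count verified: $total == $nr_hugepages + $surp + $resv"
    else
        echo "unexpected hugepage count: $total" >&2
    fi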
00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 10:10:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 
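For context, the state being verified here was requested earlier in this section: custom_alloc builds HUGENODE='nodes_hp[0]=512' from nodes_hp and then runs scripts/setup.sh, which reserves the pages on NUMA node 0 and prints the PCI binding lines seen above. The shape of that request, lifted from the trace (treat the standalone invocation below as an assumption; the harness drives setup.sh itself and normally runs it with root privileges):

    # Per-node hugepage request as built in the trace above (illustrative invocation).
    HUGENODE='nodes_hp[0]=512'    # 512 x 2048 kB pages pinned to NUMA node 0
    export HUGENODE
    /home/vagrant/spdk_repo/spdk/scripts/setup.sh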
00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.392 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.392 10:10:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.653 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.653 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.653 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.653 10:10:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.653 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.653 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.653 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.653 10:10:27 -- setup/common.sh@32 -- # [[ 
CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.653 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.653 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.653 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.653 10:10:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.653 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.653 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.653 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.653 10:10:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.653 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.653 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.653 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.653 10:10:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.653 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.653 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.653 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.653 10:10:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.653 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.653 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.653 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.653 10:10:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.653 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.653 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.653 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.653 10:10:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.653 10:10:27 -- setup/common.sh@33 -- # echo 0 00:05:14.653 10:10:27 -- setup/common.sh@33 -- # return 0 00:05:14.653 10:10:27 -- setup/hugepages.sh@99 -- # surp=0 00:05:14.653 10:10:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:14.653 10:10:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:14.653 10:10:27 -- setup/common.sh@18 -- # local node= 00:05:14.653 10:10:27 -- setup/common.sh@19 -- # local var val 00:05:14.653 10:10:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.653 10:10:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.653 10:10:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.653 10:10:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.653 10:10:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.653 10:10:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.653 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.653 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.653 10:10:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8066500 kB' 'MemAvailable: 10550592 kB' 'Buffers: 2436 kB' 'Cached: 2689428 kB' 'SwapCached: 0 kB' 'Active: 450176 kB' 'Inactive: 2360496 kB' 'Active(anon): 129276 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360496 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120524 kB' 'Mapped: 48708 kB' 'Shmem: 10468 kB' 'KReclaimable: 79268 kB' 'Slab: 156736 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77468 kB' 'KernelStack: 6576 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 
'Committed_AS: 352016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 
10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # 
continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.654 10:10:27 -- setup/common.sh@32 
-- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.654 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.654 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- 
# IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.655 10:10:27 -- setup/common.sh@33 -- # echo 0 00:05:14.655 10:10:27 -- setup/common.sh@33 -- # return 0 00:05:14.655 10:10:27 -- setup/hugepages.sh@100 -- # resv=0 00:05:14.655 nr_hugepages=512 00:05:14.655 10:10:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:14.655 resv_hugepages=0 00:05:14.655 10:10:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:14.655 surplus_hugepages=0 00:05:14.655 10:10:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:14.655 anon_hugepages=0 00:05:14.655 10:10:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:14.655 10:10:27 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:14.655 10:10:27 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:14.655 10:10:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:14.655 10:10:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:14.655 10:10:27 -- setup/common.sh@18 -- # local node= 00:05:14.655 10:10:27 -- setup/common.sh@19 -- # local var val 00:05:14.655 10:10:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.655 10:10:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.655 10:10:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.655 10:10:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.655 10:10:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.655 10:10:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8066768 kB' 'MemAvailable: 10550860 kB' 'Buffers: 2436 kB' 'Cached: 2689428 kB' 'SwapCached: 0 kB' 'Active: 450248 kB' 'Inactive: 2360496 kB' 'Active(anon): 129348 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360496 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120644 kB' 'Mapped: 48708 kB' 'Shmem: 10468 kB' 'KReclaimable: 79268 kB' 'Slab: 156736 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77468 kB' 'KernelStack: 6608 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 
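The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" followed by "continue" above (and the identical scan for HugePages_Total that continues below) come from the meminfo helper walking every field of /proc/meminfo, or of a node's /sys/devices/system/node/nodeN/meminfo, until it reaches the requested key and echoes its value. The following is a minimal, self-contained sketch of that lookup pattern; the function name lookup_meminfo and its argument handling are illustrative assumptions, not the actual setup/common.sh implementation.

#!/usr/bin/env bash
# Minimal sketch of the meminfo key lookup traced above (illustrative only;
# lookup_meminfo is an assumed name, not the SPDK setup/common.sh helper).
lookup_meminfo() {
    local get=$1 node=${2-} mem_f=/proc/meminfo
    local re='^Node [0-9]+ (.*)$' line var val _

    # Per-node statistics live in sysfs when a node number is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while read -r line; do
        # Per-node files prefix every field with "Node <n> "; drop that prefix.
        [[ $line =~ $re ]] && line=${BASH_REMATCH[1]}
        IFS=': ' read -r var val _ <<< "$line"
        # Skip every field that is not the one being asked for -- this is the
        # stream of "continue" steps visible in the trace.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Example: the two values the test compares against its expectations.
lookup_meminfo HugePages_Rsvd     # 0 in the run above
lookup_meminfo HugePages_Total    # 512 in the run above
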
00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.655 10:10:27 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.655 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.655 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': 
' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.656 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.656 10:10:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.656 10:10:27 -- setup/common.sh@33 -- # echo 512 00:05:14.656 10:10:27 -- setup/common.sh@33 -- # return 0 00:05:14.656 10:10:27 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:14.656 10:10:27 -- setup/hugepages.sh@112 -- # get_nodes 00:05:14.656 10:10:27 -- setup/hugepages.sh@27 -- # local node 00:05:14.656 10:10:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:14.656 10:10:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:14.656 10:10:27 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:14.656 10:10:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:14.656 
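After the global HugePages_Total read-back (512 above), the trace walks each NUMA node under /sys/devices/system/node and repeats the same scan per node (HugePages_Surp on node0 in the lines that follow) before printing the "node0=512 expecting 512" verdict. Below is a compact, self-contained sketch of that per-node bookkeeping; node_meminfo, node_pages and the hard-coded expected count are illustrative names, not the actual setup/hugepages.sh helpers or variables.

#!/usr/bin/env bash
# Sketch of the per-node hugepage accounting performed after the global check
# (illustrative only; node_meminfo and node_pages are assumed names).

node_meminfo() {
    # node_meminfo KEY NODE: print the value of "Node <NODE> KEY:" from sysfs.
    awk -v key="$1:" '$3 == key { print $4 }' \
        "/sys/devices/system/node/node$2/meminfo"
}

declare -A node_pages
# Discover online NUMA nodes and record how many hugepages each one holds.
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    node_pages[$node]=$(node_meminfo HugePages_Total "$node")
done

expected=512   # what the custom_alloc test configured on node 0 in this run
for node in "${!node_pages[@]}"; do
    surp=$(node_meminfo HugePages_Surp "$node")
    echo "node${node}=${node_pages[$node]} expecting ${expected} (surplus ${surp})"
done
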
10:10:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:14.656 10:10:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:14.657 10:10:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:14.657 10:10:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.657 10:10:27 -- setup/common.sh@18 -- # local node=0 00:05:14.657 10:10:27 -- setup/common.sh@19 -- # local var val 00:05:14.657 10:10:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.657 10:10:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.657 10:10:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:14.657 10:10:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:14.657 10:10:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.657 10:10:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8066768 kB' 'MemUsed: 4175212 kB' 'SwapCached: 0 kB' 'Active: 450188 kB' 'Inactive: 2360496 kB' 'Active(anon): 129288 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360496 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 2691864 kB' 'Mapped: 48708 kB' 'AnonPages: 120536 kB' 'Shmem: 10468 kB' 'KernelStack: 6592 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79268 kB' 'Slab: 156736 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 
10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.657 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.657 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.658 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.658 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.658 10:10:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.658 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.658 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.658 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.658 10:10:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.658 10:10:27 -- setup/common.sh@32 -- # continue 00:05:14.658 10:10:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.658 10:10:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.658 10:10:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.658 10:10:27 -- setup/common.sh@33 -- # echo 0 00:05:14.658 10:10:27 -- setup/common.sh@33 -- # return 0 00:05:14.658 10:10:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:14.658 10:10:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:14.658 10:10:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:14.658 10:10:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:14.658 node0=512 expecting 512 00:05:14.658 10:10:27 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:14.658 10:10:27 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:14.658 00:05:14.658 real 0m0.518s 00:05:14.658 user 0m0.264s 00:05:14.658 sys 0m0.281s 00:05:14.658 10:10:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.658 10:10:27 -- common/autotest_common.sh@10 -- # set +x 00:05:14.658 ************************************ 00:05:14.658 END TEST custom_alloc 00:05:14.658 ************************************ 00:05:14.658 10:10:27 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:14.658 10:10:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:14.658 10:10:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.658 10:10:27 -- common/autotest_common.sh@10 -- # set +x 00:05:14.658 ************************************ 00:05:14.658 START TEST no_shrink_alloc 00:05:14.658 ************************************ 00:05:14.658 10:10:27 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:05:14.658 10:10:27 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:14.658 10:10:27 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:14.658 10:10:27 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:14.658 10:10:27 -- setup/hugepages.sh@51 -- # shift 00:05:14.658 10:10:27 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:14.658 10:10:27 -- setup/hugepages.sh@52 -- # local node_ids 00:05:14.658 10:10:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:14.658 10:10:27 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:05:14.658 10:10:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:14.658 10:10:27 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:14.658 10:10:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:14.658 10:10:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:14.658 10:10:27 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:14.658 10:10:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:14.658 10:10:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:14.658 10:10:27 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:14.658 10:10:27 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:14.658 10:10:27 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:14.658 10:10:27 -- setup/hugepages.sh@73 -- # return 0 00:05:14.658 10:10:27 -- setup/hugepages.sh@198 -- # setup output 00:05:14.658 10:10:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.658 10:10:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:14.916 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:14.916 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:14.916 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:15.178 10:10:28 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:15.178 10:10:28 -- setup/hugepages.sh@89 -- # local node 00:05:15.178 10:10:28 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:15.178 10:10:28 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:15.178 10:10:28 -- setup/hugepages.sh@92 -- # local surp 00:05:15.178 10:10:28 -- setup/hugepages.sh@93 -- # local resv 00:05:15.178 10:10:28 -- setup/hugepages.sh@94 -- # local anon 00:05:15.178 10:10:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:15.178 10:10:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:15.178 10:10:28 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:15.178 10:10:28 -- setup/common.sh@18 -- # local node= 00:05:15.178 10:10:28 -- setup/common.sh@19 -- # local var val 00:05:15.178 10:10:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.178 10:10:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.178 10:10:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.178 10:10:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.178 10:10:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.178 10:10:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.178 10:10:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7018424 kB' 'MemAvailable: 9502516 kB' 'Buffers: 2436 kB' 'Cached: 2689428 kB' 'SwapCached: 0 kB' 'Active: 450676 kB' 'Inactive: 2360496 kB' 'Active(anon): 129776 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360496 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120908 kB' 'Mapped: 48816 kB' 'Shmem: 10468 kB' 'KReclaimable: 79268 kB' 'Slab: 156728 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77460 kB' 'KernelStack: 6552 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352148 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.178 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.178 10:10:28 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 
10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # 
continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.179 10:10:28 -- setup/common.sh@33 -- # echo 0 00:05:15.179 10:10:28 -- setup/common.sh@33 -- # return 0 00:05:15.179 10:10:28 -- setup/hugepages.sh@97 -- # anon=0 00:05:15.179 10:10:28 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:15.179 10:10:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.179 10:10:28 -- setup/common.sh@18 -- # local node= 00:05:15.179 10:10:28 -- setup/common.sh@19 -- # local var val 00:05:15.179 10:10:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.179 10:10:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.179 10:10:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.179 10:10:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.179 10:10:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.179 10:10:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7018424 kB' 'MemAvailable: 9502516 kB' 'Buffers: 2436 kB' 'Cached: 2689428 kB' 'SwapCached: 0 kB' 'Active: 450412 kB' 'Inactive: 2360496 kB' 'Active(anon): 129512 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360496 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120668 kB' 'Mapped: 48816 kB' 'Shmem: 10468 kB' 'KReclaimable: 79268 kB' 'Slab: 156724 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77456 kB' 'KernelStack: 6520 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # 
continue 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.179 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.179 10:10:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # continue 
00:05:15.180 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.180 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.180 10:10:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.181 10:10:28 -- setup/common.sh@33 -- # echo 0 00:05:15.181 10:10:28 -- setup/common.sh@33 -- # return 0 00:05:15.181 10:10:28 -- setup/hugepages.sh@99 -- # surp=0 00:05:15.181 10:10:28 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:15.181 10:10:28 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:15.181 10:10:28 -- setup/common.sh@18 -- # local node= 00:05:15.181 10:10:28 -- setup/common.sh@19 -- # local var val 00:05:15.181 10:10:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.181 10:10:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.181 10:10:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.181 10:10:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.181 10:10:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.181 10:10:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7018424 kB' 'MemAvailable: 9502516 kB' 'Buffers: 2436 kB' 'Cached: 2689428 kB' 'SwapCached: 0 kB' 'Active: 449948 kB' 'Inactive: 2360496 kB' 'Active(anon): 129048 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360496 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120408 kB' 'Mapped: 48708 kB' 'Shmem: 10468 kB' 'KReclaimable: 79268 kB' 'Slab: 156732 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77464 kB' 'KernelStack: 6528 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.181 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.181 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 
00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- 
setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 
00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.182 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.182 10:10:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.182 10:10:28 -- setup/common.sh@33 -- # echo 0 00:05:15.182 10:10:28 -- setup/common.sh@33 -- # return 0 00:05:15.182 10:10:28 -- setup/hugepages.sh@100 -- # resv=0 00:05:15.182 nr_hugepages=1024 00:05:15.182 10:10:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:15.182 resv_hugepages=0 00:05:15.182 10:10:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:15.182 surplus_hugepages=0 00:05:15.182 10:10:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:15.182 anon_hugepages=0 00:05:15.182 10:10:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:15.183 10:10:28 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:15.183 10:10:28 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:15.183 10:10:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:15.183 10:10:28 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:15.183 10:10:28 -- setup/common.sh@18 -- # local node= 00:05:15.183 10:10:28 -- setup/common.sh@19 -- # local var val 00:05:15.183 10:10:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.183 10:10:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
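For readers following the trace: the loop above is the get_meminfo helper from setup/common.sh scanning /proc/meminfo field by field with IFS=': ' read -r var val _ and skipping every key other than the one requested, which is how surp and resv both resolve to 0 in this run. Below is a minimal stand-alone sketch of that lookup pattern, assuming only a readable /proc/meminfo; the function name meminfo_value is illustrative and this is not the actual common.sh source.

#!/usr/bin/env bash
# Sketch only: approximates the get_meminfo lookup visible in the trace above.
meminfo_value() {
    local get=$1          # key to look up, e.g. HugePages_Rsvd
    local var val _
    while IFS=': ' read -r var val _; do
        # val is already the bare number; a trailing "kB" unit, if present,
        # lands in the discarded third field.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

# Usage mirroring hugepages.sh: surplus and reserved pages should both be 0
# when every requested hugepage was actually allocated.
surp=$(meminfo_value HugePages_Surp)
resv=$(meminfo_value HugePages_Rsvd)
echo "surplus_hugepages=$surp resv_hugepages=$resv"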
00:05:15.183 10:10:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.183 10:10:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.183 10:10:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.183 10:10:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.183 10:10:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7018424 kB' 'MemAvailable: 9502516 kB' 'Buffers: 2436 kB' 'Cached: 2689428 kB' 'SwapCached: 0 kB' 'Active: 449960 kB' 'Inactive: 2360496 kB' 'Active(anon): 129060 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360496 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120424 kB' 'Mapped: 48708 kB' 'Shmem: 10468 kB' 'KReclaimable: 79268 kB' 'Slab: 156732 kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77464 kB' 'KernelStack: 6528 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.183 10:10:28 -- 
setup/common.sh@32 -- # continue 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 
00:05:15.183 10:10:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.183 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.183 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 
00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 
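The arithmetic traced from setup/hugepages.sh a little earlier, (( 1024 == nr_hugepages + surp + resv )) followed by (( 1024 == nr_hugepages )), carries the actual assertion of this step: the kernel's HugePages_Total must equal the requested count once surplus and reserved pages are added back in. A worked sketch of that invariant, using the values visible in this log (illustrative only, not the hugepages.sh source):

# Accounting check sketched from the hugepages.sh trace; the values are the
# ones reported in this run.
nr_hugepages=1024   # requested by the test
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd
total=1024          # HugePages_Total

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "mismatch: total=$total, expected $((nr_hugepages + surp + resv))" >&2
fi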
00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.184 10:10:28 -- setup/common.sh@33 -- # echo 1024 00:05:15.184 10:10:28 -- setup/common.sh@33 -- # return 0 00:05:15.184 10:10:28 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:15.184 10:10:28 -- setup/hugepages.sh@112 -- # get_nodes 00:05:15.184 10:10:28 -- setup/hugepages.sh@27 -- # local node 00:05:15.184 10:10:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:15.184 10:10:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:15.184 10:10:28 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:15.184 10:10:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:15.184 10:10:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:15.184 10:10:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:15.184 10:10:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:15.184 10:10:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.184 10:10:28 -- setup/common.sh@18 -- # local node=0 00:05:15.184 10:10:28 -- setup/common.sh@19 -- # local var val 00:05:15.184 10:10:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.184 10:10:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.184 10:10:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:15.184 10:10:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:15.184 10:10:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.184 10:10:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7018424 kB' 'MemUsed: 5223556 kB' 'SwapCached: 0 kB' 'Active: 450268 kB' 'Inactive: 2360496 kB' 'Active(anon): 129368 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360496 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 2691864 kB' 'Mapped: 48708 kB' 'AnonPages: 120492 kB' 'Shmem: 10468 kB' 'KernelStack: 6544 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79268 kB' 'Slab: 156728 
kB' 'SReclaimable: 79268 kB' 'SUnreclaim: 77460 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.184 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.184 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 
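This second pass of the lookup runs with node=0, so the trace above shows mem_f switching from /proc/meminfo to /sys/devices/system/node/node0/meminfo and the leading "Node 0 " prefix being stripped from each line before parsing. A rough stand-alone equivalent of that per-node variant follows; it is illustrative rather than the actual common.sh code, and node_meminfo_value is an assumed name.

#!/usr/bin/env bash
# Illustrative only: per-node variant of the lookup traced above.
shopt -s extglob    # the "Node +([0-9]) " prefix strip uses an extended glob

node_meminfo_value() {
    local node=$1 get=$2
    local -a mem
    # Per-node counters live in sysfs and carry a "Node <N> " prefix per line.
    mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Surplus hugepages on node 0, as queried in the trace:
node_meminfo_value 0 HugePages_Surp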
00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- 
setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # continue 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.185 10:10:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.185 10:10:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.185 10:10:28 -- setup/common.sh@33 -- # echo 0 00:05:15.185 10:10:28 -- setup/common.sh@33 -- # return 0 00:05:15.185 10:10:28 -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:05:15.185 10:10:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:15.185 10:10:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:15.185 10:10:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:15.185 node0=1024 expecting 1024 00:05:15.185 10:10:28 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:15.185 10:10:28 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:15.185 10:10:28 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:15.185 10:10:28 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:15.186 10:10:28 -- setup/hugepages.sh@202 -- # setup output 00:05:15.186 10:10:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.186 10:10:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:15.444 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:15.444 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:15.444 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:15.706 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:15.706 10:10:28 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:15.706 10:10:28 -- setup/hugepages.sh@89 -- # local node 00:05:15.706 10:10:28 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:15.706 10:10:28 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:15.706 10:10:28 -- setup/hugepages.sh@92 -- # local surp 00:05:15.706 10:10:28 -- setup/hugepages.sh@93 -- # local resv 00:05:15.706 10:10:28 -- setup/hugepages.sh@94 -- # local anon 00:05:15.706 10:10:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:15.706 10:10:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:15.706 10:10:28 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:15.706 10:10:28 -- setup/common.sh@18 -- # local node= 00:05:15.706 10:10:28 -- setup/common.sh@19 -- # local var val 00:05:15.706 10:10:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.706 10:10:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.706 10:10:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.706 10:10:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.706 10:10:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.706 10:10:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.706 10:10:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.706 10:10:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7015912 kB' 'MemAvailable: 9500000 kB' 'Buffers: 2436 kB' 'Cached: 2689428 kB' 'SwapCached: 0 kB' 'Active: 446064 kB' 'Inactive: 2360496 kB' 'Active(anon): 125164 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360496 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 116068 kB' 'Mapped: 48088 kB' 'Shmem: 10468 kB' 'KReclaimable: 79264 kB' 'Slab: 156556 kB' 'SReclaimable: 79264 kB' 'SUnreclaim: 77292 kB' 'KernelStack: 6488 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 
00:05:15.706 10:10:28 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:15.706 10:10:28 -- setup/hugepages.sh@89 -- # local node
00:05:15.706 10:10:28 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:15.706 10:10:28 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:15.706 10:10:28 -- setup/hugepages.sh@92 -- # local surp
00:05:15.706 10:10:28 -- setup/hugepages.sh@93 -- # local resv
00:05:15.706 10:10:28 -- setup/hugepages.sh@94 -- # local anon
00:05:15.706 10:10:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:15.706 10:10:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:15.706 10:10:28 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:15.706 10:10:28 -- setup/common.sh@18 -- # local node=
00:05:15.706 10:10:28 -- setup/common.sh@19 -- # local var val
00:05:15.706 10:10:28 -- setup/common.sh@20 -- # local mem_f mem
00:05:15.706 10:10:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:15.706 10:10:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:15.706 10:10:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:15.706 10:10:28 -- setup/common.sh@28 -- # mapfile -t mem
00:05:15.706 10:10:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:15.706 10:10:28 -- setup/common.sh@31 -- # IFS=': '
00:05:15.706 10:10:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7015912 kB' 'MemAvailable: 9500000 kB' 'Buffers: 2436 kB' 'Cached: 2689428 kB' 'SwapCached: 0 kB' 'Active: 446064 kB' 'Inactive: 2360496 kB' 'Active(anon): 125164 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360496 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 116068 kB' 'Mapped: 48088 kB' 'Shmem: 10468 kB' 'KReclaimable: 79264 kB' 'Slab: 156556 kB' 'SReclaimable: 79264 kB' 'SUnreclaim: 77292 kB' 'KernelStack: 6488 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
[... per-field scan of the snapshot above: each entry is read with "read -r var val _", compared against AnonHugePages, and skipped with "continue" until the matching field is reached ...]
00:05:15.707 10:10:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:15.707 10:10:28 -- setup/common.sh@33 -- # echo 0
00:05:15.707 10:10:28 -- setup/common.sh@33 -- # return 0
00:05:15.707 10:10:28 -- setup/hugepages.sh@97 -- # anon=0
00:05:15.707 10:10:28 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:15.707 10:10:28 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:15.707 10:10:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7015912 kB' 'MemAvailable: 9500000 kB' 'Buffers: 2436 kB' 'Cached: 2689428 kB' 'SwapCached: 0 kB' 'Active: 445388 kB' 'Inactive: 2360496 kB' 'Active(anon): 124488 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360496 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 115832 kB' 'Mapped: 47972 kB' 'Shmem: 10468 kB' 'KReclaimable: 79264 kB' 'Slab: 156544 kB' 'SReclaimable: 79264 kB' 'SUnreclaim: 77280 kB' 'KernelStack: 6416 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
[... same setup as above (mem_f=/proc/meminfo, mapfile -t mem, IFS=': '), followed by the per-field scan; every field is skipped with "continue" until HugePages_Surp is reached ...]
00:05:15.709 10:10:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:15.709 10:10:28 -- setup/common.sh@33 -- # echo 0
00:05:15.709 10:10:28 -- setup/common.sh@33 -- # return 0
00:05:15.709 10:10:28 -- setup/hugepages.sh@99 -- # surp=0
00:05:15.709 10:10:28 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:15.709 10:10:28 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:15.709 10:10:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7016952 kB' 'MemAvailable: 9501040 kB' 'Buffers: 2436 kB' 'Cached: 2689428 kB' 'SwapCached: 0 kB' 'Active: 445192 kB' 'Inactive: 2360496 kB' 'Active(anon): 124292 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360496 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 115436 kB' 'Mapped: 47972 kB' 'Shmem: 10468 kB' 'KReclaimable: 79264 kB' 'Slab: 156528 kB' 'SReclaimable: 79264 kB' 'SUnreclaim: 77264 kB' 'KernelStack: 6432 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
[... same setup and per-field scan as above; every field is skipped with "continue" until HugePages_Rsvd is reached ...]
00:05:15.710 10:10:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:15.710 10:10:28 -- setup/common.sh@33 -- # echo 0
00:05:15.710 10:10:28 -- setup/common.sh@33 -- # return 0
00:05:15.710 10:10:28 -- setup/hugepages.sh@100 -- # resv=0
00:05:15.710 nr_hugepages=1024
00:05:15.710 10:10:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:15.710 resv_hugepages=0
00:05:15.710 10:10:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:15.710 surplus_hugepages=0
00:05:15.710 10:10:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:15.710 anon_hugepages=0
00:05:15.710 10:10:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:15.710 10:10:28 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:15.710 10:10:28 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
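The two arithmetic checks above are the core of verify_nr_hugepages: with anon=0, surp=0 and resv=0 collected, HugePages_Total (1024) must equal the expected nr_hugepages plus surplus plus reserved pages. A standalone restatement of that accounting, with illustrative variable names rather than the script's own, is:

# Sketch of the accounting check performed above (names are illustrative):
# the configured hugepage pool must equal expected + surplus + reserved pages.
expected=1024

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

echo "nr_hugepages=$expected"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"

if (( total == expected + surp + resv )); then
    echo "hugepage accounting consistent (total=$total)"
else
    echo "hugepage accounting mismatch: total=$total expected=$expected surp=$surp resv=$resv" >&2
fi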
00:05:15.710 10:10:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:15.710 10:10:28 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:15.710 10:10:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7017036 kB' 'MemAvailable: 9501124 kB' 'Buffers: 2436 kB' 'Cached: 2689428 kB' 'SwapCached: 0 kB' 'Active: 445188 kB' 'Inactive: 2360496 kB' 'Active(anon): 124288 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360496 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 115696 kB' 'Mapped: 47972 kB' 'Shmem: 10468 kB' 'KReclaimable: 79264 kB' 'Slab: 156524 kB' 'SReclaimable: 79264 kB' 'SUnreclaim: 77260 kB' 'KernelStack: 6432 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
[... same setup and per-field scan as above; every field is skipped with "continue" until HugePages_Total is reached ...]
00:05:15.712 10:10:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:15.712 10:10:29 -- setup/common.sh@33 -- # echo 1024
00:05:15.712 10:10:29 -- setup/common.sh@33 -- # return 0
00:05:15.712 10:10:29 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:15.712 10:10:29 -- setup/hugepages.sh@112 -- # get_nodes
00:05:15.712 10:10:29 -- setup/hugepages.sh@27 -- # local node
00:05:15.712 10:10:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:15.712 10:10:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:15.712 10:10:29 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:15.712 10:10:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
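get_nodes above walks the /sys/devices/system/node/node<N> directories (a single node here, so no_nodes=1) so that per-node counters can be compared against the system-wide pool; the next step reads node0's own meminfo file. An illustrative way to gather the same per-node totals, not taken from hugepages.sh, is:

# Illustrative per-node accounting (not the hugepages.sh get_nodes function):
# count hugepages on every NUMA node from that node's own meminfo file.
declare -A node_pages

for d in /sys/devices/system/node/node[0-9]*; do
    node=${d##*node}
    node_pages[$node]=$(awk '/HugePages_Total:/ {print $NF}' "$d/meminfo")
done

echo "nodes found: ${#node_pages[@]}"
for node in "${!node_pages[@]}"; do
    echo "node$node: ${node_pages[$node]} hugepages"
done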
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.712 10:10:29 -- setup/common.sh@33 -- # echo 1024 00:05:15.712 10:10:29 -- setup/common.sh@33 -- # return 0 00:05:15.712 10:10:29 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:15.712 10:10:29 -- setup/hugepages.sh@112 -- # get_nodes 00:05:15.712 10:10:29 -- setup/hugepages.sh@27 -- # local node 00:05:15.712 10:10:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:15.712 10:10:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:15.712 10:10:29 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:15.712 10:10:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:15.712 10:10:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:15.712 10:10:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:15.712 10:10:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:15.712 10:10:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.712 10:10:29 -- setup/common.sh@18 -- # local node=0 00:05:15.712 10:10:29 -- setup/common.sh@19 -- # local var val 00:05:15.712 10:10:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.712 10:10:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.712 10:10:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:15.712 10:10:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:15.712 10:10:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.712 10:10:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.712 10:10:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7017036 kB' 'MemUsed: 5224944 kB' 'SwapCached: 0 kB' 'Active: 445424 kB' 'Inactive: 2360496 kB' 'Active(anon): 124524 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2360496 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 2691864 kB' 'Mapped: 47972 kB' 'AnonPages: 115684 kB' 'Shmem: 10468 kB' 'KernelStack: 6432 kB' 'PageTables: 3860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79264 kB' 'Slab: 156524 kB' 'SReclaimable: 79264 kB' 'SUnreclaim: 77260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 
0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 
00:05:15.712 10:10:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.712 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.712 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.713 10:10:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # continue 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.713 10:10:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.713 10:10:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.713 10:10:29 -- setup/common.sh@33 -- # echo 0 00:05:15.713 10:10:29 -- setup/common.sh@33 -- # return 0 00:05:15.713 10:10:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:15.713 10:10:29 -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:15.713 10:10:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:15.713 node0=1024 expecting 1024 00:05:15.713 10:10:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:15.713 10:10:29 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:15.713 10:10:29 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:15.713 00:05:15.713 real 0m1.079s 00:05:15.713 user 0m0.560s 00:05:15.713 sys 0m0.581s 00:05:15.713 10:10:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.713 ************************************ 00:05:15.713 END TEST no_shrink_alloc 00:05:15.713 10:10:29 -- common/autotest_common.sh@10 -- # set +x 00:05:15.713 ************************************ 00:05:15.713 10:10:29 -- setup/hugepages.sh@217 -- # clear_hp 00:05:15.713 10:10:29 -- setup/hugepages.sh@37 -- # local node hp 00:05:15.713 10:10:29 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:15.713 10:10:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:15.713 10:10:29 -- setup/hugepages.sh@41 -- # echo 0 00:05:15.713 10:10:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:15.713 10:10:29 -- setup/hugepages.sh@41 -- # echo 0 00:05:15.713 10:10:29 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:15.713 10:10:29 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:15.713 00:05:15.713 real 0m4.853s 00:05:15.713 user 0m2.335s 00:05:15.713 sys 0m2.486s 00:05:15.713 10:10:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.713 10:10:29 -- common/autotest_common.sh@10 -- # set +x 00:05:15.713 ************************************ 00:05:15.713 END TEST hugepages 00:05:15.713 ************************************ 00:05:15.713 10:10:29 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:15.713 10:10:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:15.713 10:10:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.713 10:10:29 -- common/autotest_common.sh@10 -- # set +x 00:05:15.972 ************************************ 00:05:15.972 START TEST driver 00:05:15.972 ************************************ 00:05:15.972 10:10:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:15.972 * Looking for test storage... 
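For reference, the hugepage accounting traced above boils down to the helper sketched below: setup/common.sh reads either /proc/meminfo or the per-node /sys/devices/system/node/node0/meminfo file, strips the "Node <N> " prefix, and scans field by field until it finds the requested entry (HugePages_Total, HugePages_Surp, ...). This is a condensed sketch reconstructed from the trace, not the harness's verbatim code, and the function name get_node_meminfo is illustrative.

shopt -s extglob

# Minimal re-creation of the meminfo lookup exercised above: print the value
# of one field, preferring the per-node meminfo file when it exists.
get_node_meminfo() {
    local get=$1 node=${2:-0} var val _
    local mem_f=/proc/meminfo mem=()
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node lines look like "Node 0 HugePages_Total:    1024"; drop the
    # prefix so both file flavours parse identically.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# The test then verifies that node0 still holds the full reservation, which is
# what produces the 'node0=1024 expecting 1024' line above, roughly:
# (( $(get_node_meminfo HugePages_Total 0) == 1024 )) && echo 'node0=1024 expecting 1024'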
00:05:15.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:15.972 10:10:29 -- setup/driver.sh@68 -- # setup reset 00:05:15.972 10:10:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:15.972 10:10:29 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:16.538 10:10:29 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:16.538 10:10:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:16.538 10:10:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.538 10:10:29 -- common/autotest_common.sh@10 -- # set +x 00:05:16.538 ************************************ 00:05:16.538 START TEST guess_driver 00:05:16.538 ************************************ 00:05:16.538 10:10:29 -- common/autotest_common.sh@1104 -- # guess_driver 00:05:16.538 10:10:29 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:16.538 10:10:29 -- setup/driver.sh@47 -- # local fail=0 00:05:16.538 10:10:29 -- setup/driver.sh@49 -- # pick_driver 00:05:16.538 10:10:29 -- setup/driver.sh@36 -- # vfio 00:05:16.538 10:10:29 -- setup/driver.sh@21 -- # local iommu_grups 00:05:16.538 10:10:29 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:16.538 10:10:29 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:16.538 10:10:29 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:16.538 10:10:29 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:16.538 10:10:29 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:16.538 10:10:29 -- setup/driver.sh@32 -- # return 1 00:05:16.538 10:10:29 -- setup/driver.sh@38 -- # uio 00:05:16.538 10:10:29 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:16.538 10:10:29 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:16.538 10:10:29 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:16.538 10:10:29 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:16.538 10:10:29 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:16.538 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:16.538 10:10:29 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:16.538 Looking for driver=uio_pci_generic 00:05:16.538 10:10:29 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:16.538 10:10:29 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:16.538 10:10:29 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:16.538 10:10:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.538 10:10:29 -- setup/driver.sh@45 -- # setup output config 00:05:16.538 10:10:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.538 10:10:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:17.104 10:10:30 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:17.104 10:10:30 -- setup/driver.sh@58 -- # continue 00:05:17.104 10:10:30 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.104 10:10:30 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.104 10:10:30 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:17.104 10:10:30 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.104 10:10:30 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.104 10:10:30 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:17.104 10:10:30 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.369 10:10:30 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:17.369 10:10:30 -- setup/driver.sh@65 -- # setup reset 00:05:17.369 10:10:30 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:17.369 10:10:30 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:17.935 00:05:17.935 real 0m1.370s 00:05:17.935 user 0m0.542s 00:05:17.935 sys 0m0.836s 00:05:17.935 10:10:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.935 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:05:17.935 ************************************ 00:05:17.935 END TEST guess_driver 00:05:17.935 ************************************ 00:05:17.935 00:05:17.935 real 0m1.978s 00:05:17.935 user 0m0.755s 00:05:17.935 sys 0m1.275s 00:05:17.935 10:10:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.935 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:05:17.935 ************************************ 00:05:17.935 END TEST driver 00:05:17.935 ************************************ 00:05:17.935 10:10:31 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:17.935 10:10:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:17.935 10:10:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.935 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:05:17.935 ************************************ 00:05:17.935 START TEST devices 00:05:17.935 ************************************ 00:05:17.935 10:10:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:17.935 * Looking for test storage... 00:05:17.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:17.935 10:10:31 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:17.935 10:10:31 -- setup/devices.sh@192 -- # setup reset 00:05:17.935 10:10:31 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:17.935 10:10:31 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:18.871 10:10:31 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:18.871 10:10:31 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:18.871 10:10:31 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:18.871 10:10:31 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:18.871 10:10:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:18.871 10:10:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:18.871 10:10:31 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:18.871 10:10:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:18.871 10:10:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:18.871 10:10:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:18.871 10:10:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:18.871 10:10:31 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:18.871 10:10:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:18.871 10:10:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:18.871 10:10:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:18.871 10:10:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:18.871 10:10:31 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:18.871 10:10:31 -- 
common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:18.871 10:10:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:18.871 10:10:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:18.871 10:10:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:18.871 10:10:31 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:18.871 10:10:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:18.871 10:10:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:18.871 10:10:31 -- setup/devices.sh@196 -- # blocks=() 00:05:18.871 10:10:31 -- setup/devices.sh@196 -- # declare -a blocks 00:05:18.871 10:10:31 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:18.871 10:10:31 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:18.871 10:10:31 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:18.871 10:10:31 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:18.871 10:10:31 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:18.871 10:10:31 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:18.871 10:10:31 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:18.871 10:10:31 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:18.871 10:10:31 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:18.871 10:10:31 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:18.871 10:10:31 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:18.871 No valid GPT data, bailing 00:05:18.871 10:10:32 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:18.871 10:10:32 -- scripts/common.sh@393 -- # pt= 00:05:18.871 10:10:32 -- scripts/common.sh@394 -- # return 1 00:05:18.871 10:10:32 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:18.871 10:10:32 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:18.871 10:10:32 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:18.871 10:10:32 -- setup/common.sh@80 -- # echo 5368709120 00:05:18.871 10:10:32 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:18.871 10:10:32 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:18.871 10:10:32 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:18.871 10:10:32 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:18.871 10:10:32 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:18.871 10:10:32 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:18.871 10:10:32 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:18.871 10:10:32 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:18.871 10:10:32 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:18.871 10:10:32 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:18.871 10:10:32 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:18.871 No valid GPT data, bailing 00:05:18.871 10:10:32 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:18.871 10:10:32 -- scripts/common.sh@393 -- # pt= 00:05:18.872 10:10:32 -- scripts/common.sh@394 -- # return 1 00:05:18.872 10:10:32 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:18.872 10:10:32 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:18.872 10:10:32 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:18.872 10:10:32 -- setup/common.sh@80 -- # echo 4294967296 00:05:18.872 10:10:32 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:18.872 10:10:32 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:18.872 10:10:32 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:18.872 10:10:32 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:18.872 10:10:32 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:18.872 10:10:32 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:18.872 10:10:32 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:18.872 10:10:32 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:18.872 10:10:32 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:18.872 10:10:32 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:18.872 10:10:32 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:18.872 No valid GPT data, bailing 00:05:18.872 10:10:32 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:18.872 10:10:32 -- scripts/common.sh@393 -- # pt= 00:05:18.872 10:10:32 -- scripts/common.sh@394 -- # return 1 00:05:18.872 10:10:32 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:18.872 10:10:32 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:18.872 10:10:32 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:18.872 10:10:32 -- setup/common.sh@80 -- # echo 4294967296 00:05:18.872 10:10:32 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:18.872 10:10:32 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:18.872 10:10:32 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:18.872 10:10:32 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:18.872 10:10:32 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:18.872 10:10:32 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:18.872 10:10:32 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:18.872 10:10:32 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:18.872 10:10:32 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:18.872 10:10:32 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:18.872 10:10:32 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:18.872 No valid GPT data, bailing 00:05:18.872 10:10:32 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:18.872 10:10:32 -- scripts/common.sh@393 -- # pt= 00:05:18.872 10:10:32 -- scripts/common.sh@394 -- # return 1 00:05:18.872 10:10:32 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:18.872 10:10:32 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:18.872 10:10:32 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:18.872 10:10:32 -- setup/common.sh@80 -- # echo 4294967296 00:05:18.872 10:10:32 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:18.872 10:10:32 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:18.872 10:10:32 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:18.872 10:10:32 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:18.872 10:10:32 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:18.872 10:10:32 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:18.872 10:10:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.872 10:10:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.872 10:10:32 -- common/autotest_common.sh@10 -- # set +x 00:05:18.872 
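Before the mount tests start, the devices suite walks /sys/block to decide which namespaces are safe to scribble on, as the get_zoned_devs and block_in_use trace above shows: zoned namespaces are skipped, disks that already carry a partition table are treated as in use (the harness probes with its own scripts/spdk-gpt.py, hence the "No valid GPT data, bailing" lines, then falls back to blkid), and anything below min_disk_size is ignored. A minimal sketch of that selection logic, keeping only the blkid probe:

shopt -s extglob nullglob

# Pick NVMe namespaces that are regular (non-zoned), empty, and large enough
# to serve as test disks. Mirrors the selection traced above; illustrative.
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in the trace
blocks=()

for block in /sys/block/nvme!(*c*); do
    dev=${block##*/}
    # 1. Skip zoned namespaces ("none" marks an ordinary block device).
    [[ $(<"$block/queue/zoned") != none ]] && continue
    # 2. Skip devices that already have a partition table (considered in use).
    [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] && continue
    # 3. Size check: /sys/block/<dev>/size is in 512-byte sectors.
    (( $(<"$block/size") * 512 >= min_disk_size )) && blocks+=("$dev")
done

printf 'usable test disk: %s\n' "${blocks[@]}"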
************************************ 00:05:18.872 START TEST nvme_mount 00:05:18.872 ************************************ 00:05:18.872 10:10:32 -- common/autotest_common.sh@1104 -- # nvme_mount 00:05:18.872 10:10:32 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:18.872 10:10:32 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:18.872 10:10:32 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:18.872 10:10:32 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:18.872 10:10:32 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:18.872 10:10:32 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:18.872 10:10:32 -- setup/common.sh@40 -- # local part_no=1 00:05:18.872 10:10:32 -- setup/common.sh@41 -- # local size=1073741824 00:05:18.872 10:10:32 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:18.872 10:10:32 -- setup/common.sh@44 -- # parts=() 00:05:18.872 10:10:32 -- setup/common.sh@44 -- # local parts 00:05:18.872 10:10:32 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:18.872 10:10:32 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:18.872 10:10:32 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:18.872 10:10:32 -- setup/common.sh@46 -- # (( part++ )) 00:05:18.872 10:10:32 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:18.872 10:10:32 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:18.872 10:10:32 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:18.872 10:10:32 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:20.249 Creating new GPT entries in memory. 00:05:20.249 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:20.249 other utilities. 00:05:20.249 10:10:33 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:20.249 10:10:33 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.249 10:10:33 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:20.249 10:10:33 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:20.249 10:10:33 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:21.182 Creating new GPT entries in memory. 00:05:21.182 The operation has completed successfully. 
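The sgdisk output above comes from the harness's partition_drive helper: it zaps any existing label, then creates the requested number of partitions under flock while a companion scripts/sync_dev_uevents.sh waits for the new partition's uevent before mkfs runs. Roughly, with the udev synchronisation left out (a sketch, not the verbatim helper):

# Re-create the partitioning step traced above (zap-all, then 1:2048:264191).
# Illustrative sketch; the real helper also waits for the partition uevents.
partition_drive() {
    local disk=$1 part_no=${2:-1} size=${3:-1073741824}
    local part part_start=0 part_end=0

    (( size /= 4096 ))                 # same arithmetic as the trace: 1073741824 -> 262144
    sgdisk "/dev/$disk" --zap-all      # destroy any existing GPT/MBR structures

    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        # flock keeps other tools off the disk while the table is rewritten;
        # for nvme0n1 this yields --new=1:2048:264191 as seen above.
        flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
    done
}

# partition_drive nvme0n1 1    # nvme_mount; dm_mount later requests 2 partitions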
00:05:21.182 10:10:34 -- setup/common.sh@57 -- # (( part++ )) 00:05:21.182 10:10:34 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:21.182 10:10:34 -- setup/common.sh@62 -- # wait 63737 00:05:21.182 10:10:34 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.182 10:10:34 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:21.182 10:10:34 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.183 10:10:34 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:21.183 10:10:34 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:21.183 10:10:34 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.183 10:10:34 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:21.183 10:10:34 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:21.183 10:10:34 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:21.183 10:10:34 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.183 10:10:34 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:21.183 10:10:34 -- setup/devices.sh@53 -- # local found=0 00:05:21.183 10:10:34 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:21.183 10:10:34 -- setup/devices.sh@56 -- # : 00:05:21.183 10:10:34 -- setup/devices.sh@59 -- # local pci status 00:05:21.183 10:10:34 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:21.183 10:10:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.183 10:10:34 -- setup/devices.sh@47 -- # setup output config 00:05:21.183 10:10:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.183 10:10:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:21.183 10:10:34 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:21.183 10:10:34 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:21.183 10:10:34 -- setup/devices.sh@63 -- # found=1 00:05:21.183 10:10:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.183 10:10:34 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:21.183 10:10:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.440 10:10:34 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:21.440 10:10:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.698 10:10:34 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:21.698 10:10:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.698 10:10:34 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:21.698 10:10:34 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:21.698 10:10:34 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.698 10:10:34 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:21.698 10:10:34 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:21.698 10:10:34 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:21.698 10:10:34 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.698 10:10:34 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.698 10:10:34 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:21.698 10:10:34 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:21.698 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:21.698 10:10:35 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:21.698 10:10:35 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:21.957 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:21.957 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:21.957 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:21.957 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:21.957 10:10:35 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:21.957 10:10:35 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:21.957 10:10:35 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.957 10:10:35 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:21.957 10:10:35 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:21.957 10:10:35 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.957 10:10:35 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:21.957 10:10:35 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:21.957 10:10:35 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:21.957 10:10:35 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.957 10:10:35 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:21.957 10:10:35 -- setup/devices.sh@53 -- # local found=0 00:05:21.957 10:10:35 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:21.957 10:10:35 -- setup/devices.sh@56 -- # : 00:05:21.957 10:10:35 -- setup/devices.sh@59 -- # local pci status 00:05:21.957 10:10:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.957 10:10:35 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:21.957 10:10:35 -- setup/devices.sh@47 -- # setup output config 00:05:21.957 10:10:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.957 10:10:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:22.215 10:10:35 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.215 10:10:35 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:22.215 10:10:35 -- setup/devices.sh@63 -- # found=1 00:05:22.215 10:10:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.215 10:10:35 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.215 
10:10:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.492 10:10:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.492 10:10:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.492 10:10:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.492 10:10:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.492 10:10:35 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:22.492 10:10:35 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:22.492 10:10:35 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.492 10:10:35 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:22.492 10:10:35 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:22.492 10:10:35 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.492 10:10:35 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:22.492 10:10:35 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:22.492 10:10:35 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:22.492 10:10:35 -- setup/devices.sh@50 -- # local mount_point= 00:05:22.492 10:10:35 -- setup/devices.sh@51 -- # local test_file= 00:05:22.492 10:10:35 -- setup/devices.sh@53 -- # local found=0 00:05:22.492 10:10:35 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:22.492 10:10:35 -- setup/devices.sh@59 -- # local pci status 00:05:22.492 10:10:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.492 10:10:35 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:22.492 10:10:35 -- setup/devices.sh@47 -- # setup output config 00:05:22.492 10:10:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.492 10:10:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:22.751 10:10:36 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.751 10:10:36 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:22.751 10:10:36 -- setup/devices.sh@63 -- # found=1 00:05:22.751 10:10:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.751 10:10:36 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.751 10:10:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.010 10:10:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:23.010 10:10:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.269 10:10:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:23.269 10:10:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.269 10:10:36 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:23.269 10:10:36 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:23.269 10:10:36 -- setup/devices.sh@68 -- # return 0 00:05:23.269 10:10:36 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:23.269 10:10:36 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.269 10:10:36 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.269 10:10:36 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:23.269 10:10:36 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:23.269 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:23.269 00:05:23.269 real 0m4.341s 00:05:23.269 user 0m0.935s 00:05:23.269 sys 0m1.128s 00:05:23.269 10:10:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.269 10:10:36 -- common/autotest_common.sh@10 -- # set +x 00:05:23.269 ************************************ 00:05:23.269 END TEST nvme_mount 00:05:23.269 ************************************ 00:05:23.269 10:10:36 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:23.269 10:10:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:23.269 10:10:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.269 10:10:36 -- common/autotest_common.sh@10 -- # set +x 00:05:23.269 ************************************ 00:05:23.269 START TEST dm_mount 00:05:23.269 ************************************ 00:05:23.269 10:10:36 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:23.269 10:10:36 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:23.269 10:10:36 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:23.269 10:10:36 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:23.269 10:10:36 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:23.269 10:10:36 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:23.269 10:10:36 -- setup/common.sh@40 -- # local part_no=2 00:05:23.269 10:10:36 -- setup/common.sh@41 -- # local size=1073741824 00:05:23.269 10:10:36 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:23.269 10:10:36 -- setup/common.sh@44 -- # parts=() 00:05:23.269 10:10:36 -- setup/common.sh@44 -- # local parts 00:05:23.269 10:10:36 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:23.269 10:10:36 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:23.269 10:10:36 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:23.269 10:10:36 -- setup/common.sh@46 -- # (( part++ )) 00:05:23.269 10:10:36 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:23.269 10:10:36 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:23.269 10:10:36 -- setup/common.sh@46 -- # (( part++ )) 00:05:23.269 10:10:36 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:23.269 10:10:36 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:23.269 10:10:36 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:23.269 10:10:36 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:24.646 Creating new GPT entries in memory. 00:05:24.646 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:24.646 other utilities. 00:05:24.646 10:10:37 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:24.646 10:10:37 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:24.646 10:10:37 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:24.646 10:10:37 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:24.646 10:10:37 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:25.582 Creating new GPT entries in memory. 00:05:25.582 The operation has completed successfully. 00:05:25.582 10:10:38 -- setup/common.sh@57 -- # (( part++ )) 00:05:25.582 10:10:38 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:25.582 10:10:38 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:25.582 10:10:38 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:25.582 10:10:38 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:26.518 The operation has completed successfully. 00:05:26.518 10:10:39 -- setup/common.sh@57 -- # (( part++ )) 00:05:26.518 10:10:39 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:26.518 10:10:39 -- setup/common.sh@62 -- # wait 64196 00:05:26.518 10:10:39 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:26.518 10:10:39 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:26.518 10:10:39 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:26.518 10:10:39 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:26.518 10:10:39 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:26.518 10:10:39 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:26.518 10:10:39 -- setup/devices.sh@161 -- # break 00:05:26.518 10:10:39 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:26.518 10:10:39 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:26.518 10:10:39 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:26.518 10:10:39 -- setup/devices.sh@166 -- # dm=dm-0 00:05:26.518 10:10:39 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:26.518 10:10:39 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:26.518 10:10:39 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:26.518 10:10:39 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:26.518 10:10:39 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:26.518 10:10:39 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:26.518 10:10:39 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:26.518 10:10:39 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:26.518 10:10:39 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:26.518 10:10:39 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:26.518 10:10:39 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:26.518 10:10:39 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:26.518 10:10:39 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:26.518 10:10:39 -- setup/devices.sh@53 -- # local found=0 00:05:26.518 10:10:39 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:26.518 10:10:39 -- setup/devices.sh@56 -- # : 00:05:26.518 10:10:39 -- setup/devices.sh@59 -- # local pci status 00:05:26.518 10:10:39 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:26.518 10:10:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.518 10:10:39 -- setup/devices.sh@47 -- # setup output config 00:05:26.518 10:10:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.518 10:10:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:26.777 10:10:39 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.777 10:10:39 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:26.777 10:10:39 -- setup/devices.sh@63 -- # found=1 00:05:26.777 10:10:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.777 10:10:39 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.777 10:10:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.034 10:10:40 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.034 10:10:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.034 10:10:40 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.034 10:10:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.034 10:10:40 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:27.034 10:10:40 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:27.034 10:10:40 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:27.034 10:10:40 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:27.034 10:10:40 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:27.034 10:10:40 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:27.292 10:10:40 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:27.292 10:10:40 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:27.292 10:10:40 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:27.292 10:10:40 -- setup/devices.sh@50 -- # local mount_point= 00:05:27.292 10:10:40 -- setup/devices.sh@51 -- # local test_file= 00:05:27.292 10:10:40 -- setup/devices.sh@53 -- # local found=0 00:05:27.292 10:10:40 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:27.292 10:10:40 -- setup/devices.sh@59 -- # local pci status 00:05:27.292 10:10:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.292 10:10:40 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:27.292 10:10:40 -- setup/devices.sh@47 -- # setup output config 00:05:27.292 10:10:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.292 10:10:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:27.292 10:10:40 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.292 10:10:40 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:27.292 10:10:40 -- setup/devices.sh@63 -- # found=1 00:05:27.292 10:10:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.292 10:10:40 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.292 10:10:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.550 10:10:40 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.550 10:10:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.808 10:10:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.808 10:10:41 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.808 10:10:41 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:27.808 10:10:41 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:27.808 10:10:41 -- setup/devices.sh@68 -- # return 0 00:05:27.808 10:10:41 -- setup/devices.sh@187 -- # cleanup_dm 00:05:27.808 10:10:41 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:27.808 10:10:41 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:27.808 10:10:41 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:27.808 10:10:41 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:27.808 10:10:41 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:27.808 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:27.808 10:10:41 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:27.808 10:10:41 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:27.808 00:05:27.808 real 0m4.497s 00:05:27.808 user 0m0.633s 00:05:27.808 sys 0m0.814s 00:05:27.808 10:10:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.808 10:10:41 -- common/autotest_common.sh@10 -- # set +x 00:05:27.808 ************************************ 00:05:27.808 END TEST dm_mount 00:05:27.808 ************************************ 00:05:27.808 10:10:41 -- setup/devices.sh@1 -- # cleanup 00:05:27.808 10:10:41 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:27.808 10:10:41 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:27.808 10:10:41 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:27.808 10:10:41 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:27.808 10:10:41 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:27.808 10:10:41 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:28.066 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:28.066 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:28.066 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:28.066 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:28.066 10:10:41 -- setup/devices.sh@12 -- # cleanup_dm 00:05:28.066 10:10:41 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:28.066 10:10:41 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:28.066 10:10:41 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:28.066 10:10:41 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:28.066 10:10:41 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:28.066 10:10:41 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:28.066 00:05:28.066 real 0m10.324s 00:05:28.066 user 0m2.194s 00:05:28.066 sys 0m2.514s 00:05:28.066 10:10:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.066 ************************************ 00:05:28.066 10:10:41 -- common/autotest_common.sh@10 -- # set +x 00:05:28.066 END TEST devices 00:05:28.066 ************************************ 00:05:28.324 00:05:28.324 real 0m21.528s 00:05:28.324 user 0m7.109s 00:05:28.324 sys 0m8.777s 00:05:28.324 10:10:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.324 10:10:41 -- common/autotest_common.sh@10 -- # set +x 00:05:28.324 ************************************ 00:05:28.324 END TEST setup.sh 00:05:28.324 ************************************ 00:05:28.324 10:10:41 -- 
spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:28.324 Hugepages 00:05:28.324 node hugesize free / total 00:05:28.324 node0 1048576kB 0 / 0 00:05:28.324 node0 2048kB 2048 / 2048 00:05:28.324 00:05:28.324 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:28.582 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:28.582 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:28.582 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:28.582 10:10:41 -- spdk/autotest.sh@141 -- # uname -s 00:05:28.582 10:10:41 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:28.582 10:10:41 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:28.582 10:10:41 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:29.528 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:29.528 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:29.528 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:29.528 10:10:42 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:30.462 10:10:43 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:30.462 10:10:43 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:30.462 10:10:43 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:30.462 10:10:43 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:30.462 10:10:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:30.462 10:10:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:30.462 10:10:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:30.462 10:10:43 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:30.462 10:10:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:30.720 10:10:43 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:30.720 10:10:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:30.720 10:10:43 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:30.978 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:30.978 Waiting for block devices as requested 00:05:30.978 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:30.978 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:31.236 10:10:44 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:31.236 10:10:44 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:31.236 10:10:44 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:31.236 10:10:44 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:05:31.236 10:10:44 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:31.236 10:10:44 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:31.236 10:10:44 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:31.236 10:10:44 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:31.236 10:10:44 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:31.236 10:10:44 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:31.236 10:10:44 -- 
common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:31.236 10:10:44 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:31.236 10:10:44 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:31.236 10:10:44 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:31.236 10:10:44 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:31.236 10:10:44 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:31.236 10:10:44 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:31.236 10:10:44 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:31.236 10:10:44 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:31.236 10:10:44 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:31.236 10:10:44 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:31.236 10:10:44 -- common/autotest_common.sh@1542 -- # continue 00:05:31.236 10:10:44 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:31.236 10:10:44 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:31.236 10:10:44 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:31.236 10:10:44 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:05:31.236 10:10:44 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:31.236 10:10:44 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:31.237 10:10:44 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:31.237 10:10:44 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:31.237 10:10:44 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:05:31.237 10:10:44 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:05:31.237 10:10:44 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:05:31.237 10:10:44 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:31.237 10:10:44 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:31.237 10:10:44 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:31.237 10:10:44 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:31.237 10:10:44 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:31.237 10:10:44 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:05:31.237 10:10:44 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:31.237 10:10:44 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:31.237 10:10:44 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:31.237 10:10:44 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:31.237 10:10:44 -- common/autotest_common.sh@1542 -- # continue 00:05:31.237 10:10:44 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:31.237 10:10:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:31.237 10:10:44 -- common/autotest_common.sh@10 -- # set +x 00:05:31.237 10:10:44 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:31.237 10:10:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:31.237 10:10:44 -- common/autotest_common.sh@10 -- # set +x 00:05:31.237 10:10:44 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:31.803 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:32.061 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:32.061 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:32.061 10:10:45 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:32.061 10:10:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:32.061 10:10:45 -- common/autotest_common.sh@10 -- # set +x 00:05:32.061 10:10:45 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:32.061 10:10:45 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:32.061 10:10:45 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:32.061 10:10:45 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:32.061 10:10:45 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:32.061 10:10:45 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:32.061 10:10:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:32.061 10:10:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:32.061 10:10:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:32.061 10:10:45 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:32.061 10:10:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:32.319 10:10:45 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:32.319 10:10:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:32.319 10:10:45 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:32.319 10:10:45 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:32.319 10:10:45 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:32.319 10:10:45 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:32.319 10:10:45 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:32.319 10:10:45 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:32.319 10:10:45 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:32.319 10:10:45 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:32.319 10:10:45 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:32.319 10:10:45 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:32.319 10:10:45 -- common/autotest_common.sh@1578 -- # return 0 00:05:32.319 10:10:45 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:32.319 10:10:45 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:32.319 10:10:45 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:32.319 10:10:45 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:32.319 10:10:45 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:32.319 10:10:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:32.319 10:10:45 -- common/autotest_common.sh@10 -- # set +x 00:05:32.319 10:10:45 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:32.319 10:10:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:32.319 10:10:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:32.319 10:10:45 -- common/autotest_common.sh@10 -- # set +x 00:05:32.319 ************************************ 00:05:32.319 START TEST env 00:05:32.319 ************************************ 00:05:32.319 10:10:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:32.319 * Looking for test storage... 
00:05:32.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:32.319 10:10:45 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:32.319 10:10:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:32.319 10:10:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:32.319 10:10:45 -- common/autotest_common.sh@10 -- # set +x 00:05:32.319 ************************************ 00:05:32.319 START TEST env_memory 00:05:32.319 ************************************ 00:05:32.319 10:10:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:32.319 00:05:32.319 00:05:32.319 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.319 http://cunit.sourceforge.net/ 00:05:32.319 00:05:32.319 00:05:32.319 Suite: memory 00:05:32.319 Test: alloc and free memory map ...[2024-07-26 10:10:45.699480] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:32.319 passed 00:05:32.320 Test: mem map translation ...[2024-07-26 10:10:45.732547] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:32.320 [2024-07-26 10:10:45.732623] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:32.320 [2024-07-26 10:10:45.732696] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:32.320 [2024-07-26 10:10:45.732711] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:32.578 passed 00:05:32.578 Test: mem map registration ...[2024-07-26 10:10:45.796666] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:32.578 [2024-07-26 10:10:45.796730] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:32.578 passed 00:05:32.578 Test: mem map adjacent registrations ...passed 00:05:32.578 00:05:32.578 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.578 suites 1 1 n/a 0 0 00:05:32.578 tests 4 4 4 0 0 00:05:32.578 asserts 152 152 152 0 n/a 00:05:32.578 00:05:32.578 Elapsed time = 0.216 seconds 00:05:32.578 00:05:32.578 real 0m0.232s 00:05:32.578 user 0m0.214s 00:05:32.578 sys 0m0.015s 00:05:32.578 10:10:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.578 10:10:45 -- common/autotest_common.sh@10 -- # set +x 00:05:32.578 ************************************ 00:05:32.578 END TEST env_memory 00:05:32.578 ************************************ 00:05:32.578 10:10:45 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:32.578 10:10:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:32.578 10:10:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:32.578 10:10:45 -- common/autotest_common.sh@10 -- # set +x 00:05:32.578 ************************************ 00:05:32.578 START TEST env_vtophys 00:05:32.578 ************************************ 00:05:32.578 10:10:45 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:32.578 EAL: lib.eal log level changed from notice to debug 00:05:32.578 EAL: Detected lcore 0 as core 0 on socket 0 00:05:32.578 EAL: Detected lcore 1 as core 0 on socket 0 00:05:32.578 EAL: Detected lcore 2 as core 0 on socket 0 00:05:32.578 EAL: Detected lcore 3 as core 0 on socket 0 00:05:32.578 EAL: Detected lcore 4 as core 0 on socket 0 00:05:32.578 EAL: Detected lcore 5 as core 0 on socket 0 00:05:32.578 EAL: Detected lcore 6 as core 0 on socket 0 00:05:32.578 EAL: Detected lcore 7 as core 0 on socket 0 00:05:32.578 EAL: Detected lcore 8 as core 0 on socket 0 00:05:32.578 EAL: Detected lcore 9 as core 0 on socket 0 00:05:32.578 EAL: Maximum logical cores by configuration: 128 00:05:32.578 EAL: Detected CPU lcores: 10 00:05:32.578 EAL: Detected NUMA nodes: 1 00:05:32.578 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:32.578 EAL: Detected shared linkage of DPDK 00:05:32.578 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:32.578 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:32.578 EAL: Registered [vdev] bus. 00:05:32.578 EAL: bus.vdev log level changed from disabled to notice 00:05:32.578 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:32.578 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:32.578 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:32.578 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:32.579 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:32.579 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:32.579 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:32.579 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:32.579 EAL: No shared files mode enabled, IPC will be disabled 00:05:32.579 EAL: No shared files mode enabled, IPC is disabled 00:05:32.579 EAL: Selected IOVA mode 'PA' 00:05:32.579 EAL: Probing VFIO support... 00:05:32.579 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:32.579 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:32.579 EAL: Ask a virtual area of 0x2e000 bytes 00:05:32.579 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:32.579 EAL: Setting up physically contiguous memory... 
00:05:32.579 EAL: Setting maximum number of open files to 524288 00:05:32.579 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:32.579 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:32.579 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.579 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:32.579 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.579 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.579 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:32.579 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:32.579 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.579 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:32.579 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.579 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.579 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:32.579 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:32.579 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.579 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:32.579 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.579 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.579 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:32.579 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:32.579 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.579 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:32.579 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.579 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.579 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:32.579 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:32.579 EAL: Hugepages will be freed exactly as allocated. 00:05:32.579 EAL: No shared files mode enabled, IPC is disabled 00:05:32.579 EAL: No shared files mode enabled, IPC is disabled 00:05:32.837 EAL: TSC frequency is ~2200000 KHz 00:05:32.837 EAL: Main lcore 0 is ready (tid=7f0a8f6b4a00;cpuset=[0]) 00:05:32.837 EAL: Trying to obtain current memory policy. 00:05:32.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.837 EAL: Restoring previous memory policy: 0 00:05:32.837 EAL: request: mp_malloc_sync 00:05:32.837 EAL: No shared files mode enabled, IPC is disabled 00:05:32.837 EAL: Heap on socket 0 was expanded by 2MB 00:05:32.837 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:32.837 EAL: No shared files mode enabled, IPC is disabled 00:05:32.837 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:32.837 EAL: Mem event callback 'spdk:(nil)' registered 00:05:32.837 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:32.837 00:05:32.837 00:05:32.837 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.837 http://cunit.sourceforge.net/ 00:05:32.837 00:05:32.837 00:05:32.837 Suite: components_suite 00:05:32.837 Test: vtophys_malloc_test ...passed 00:05:32.837 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:32.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.837 EAL: Restoring previous memory policy: 4 00:05:32.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.837 EAL: request: mp_malloc_sync 00:05:32.837 EAL: No shared files mode enabled, IPC is disabled 00:05:32.837 EAL: Heap on socket 0 was expanded by 4MB 00:05:32.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.837 EAL: request: mp_malloc_sync 00:05:32.838 EAL: No shared files mode enabled, IPC is disabled 00:05:32.838 EAL: Heap on socket 0 was shrunk by 4MB 00:05:32.838 EAL: Trying to obtain current memory policy. 00:05:32.838 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.838 EAL: Restoring previous memory policy: 4 00:05:32.838 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.838 EAL: request: mp_malloc_sync 00:05:32.838 EAL: No shared files mode enabled, IPC is disabled 00:05:32.838 EAL: Heap on socket 0 was expanded by 6MB 00:05:32.838 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.838 EAL: request: mp_malloc_sync 00:05:32.838 EAL: No shared files mode enabled, IPC is disabled 00:05:32.838 EAL: Heap on socket 0 was shrunk by 6MB 00:05:32.838 EAL: Trying to obtain current memory policy. 00:05:32.838 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.838 EAL: Restoring previous memory policy: 4 00:05:32.838 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.838 EAL: request: mp_malloc_sync 00:05:32.838 EAL: No shared files mode enabled, IPC is disabled 00:05:32.838 EAL: Heap on socket 0 was expanded by 10MB 00:05:32.838 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.838 EAL: request: mp_malloc_sync 00:05:32.838 EAL: No shared files mode enabled, IPC is disabled 00:05:32.838 EAL: Heap on socket 0 was shrunk by 10MB 00:05:32.838 EAL: Trying to obtain current memory policy. 00:05:32.838 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.838 EAL: Restoring previous memory policy: 4 00:05:32.838 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.838 EAL: request: mp_malloc_sync 00:05:32.838 EAL: No shared files mode enabled, IPC is disabled 00:05:32.838 EAL: Heap on socket 0 was expanded by 18MB 00:05:32.838 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.838 EAL: request: mp_malloc_sync 00:05:32.838 EAL: No shared files mode enabled, IPC is disabled 00:05:32.838 EAL: Heap on socket 0 was shrunk by 18MB 00:05:32.838 EAL: Trying to obtain current memory policy. 00:05:32.838 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.838 EAL: Restoring previous memory policy: 4 00:05:32.838 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.838 EAL: request: mp_malloc_sync 00:05:32.838 EAL: No shared files mode enabled, IPC is disabled 00:05:32.838 EAL: Heap on socket 0 was expanded by 34MB 00:05:32.838 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.838 EAL: request: mp_malloc_sync 00:05:32.838 EAL: No shared files mode enabled, IPC is disabled 00:05:32.838 EAL: Heap on socket 0 was shrunk by 34MB 00:05:32.838 EAL: Trying to obtain current memory policy. 
00:05:32.838 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.838 EAL: Restoring previous memory policy: 4 00:05:32.838 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.838 EAL: request: mp_malloc_sync 00:05:32.838 EAL: No shared files mode enabled, IPC is disabled 00:05:32.838 EAL: Heap on socket 0 was expanded by 66MB 00:05:32.838 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.838 EAL: request: mp_malloc_sync 00:05:32.838 EAL: No shared files mode enabled, IPC is disabled 00:05:32.838 EAL: Heap on socket 0 was shrunk by 66MB 00:05:32.838 EAL: Trying to obtain current memory policy. 00:05:32.838 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.838 EAL: Restoring previous memory policy: 4 00:05:32.838 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.838 EAL: request: mp_malloc_sync 00:05:32.838 EAL: No shared files mode enabled, IPC is disabled 00:05:32.838 EAL: Heap on socket 0 was expanded by 130MB 00:05:32.838 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.838 EAL: request: mp_malloc_sync 00:05:32.838 EAL: No shared files mode enabled, IPC is disabled 00:05:32.838 EAL: Heap on socket 0 was shrunk by 130MB 00:05:32.838 EAL: Trying to obtain current memory policy. 00:05:32.838 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.838 EAL: Restoring previous memory policy: 4 00:05:32.838 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.838 EAL: request: mp_malloc_sync 00:05:32.838 EAL: No shared files mode enabled, IPC is disabled 00:05:32.838 EAL: Heap on socket 0 was expanded by 258MB 00:05:33.096 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.096 EAL: request: mp_malloc_sync 00:05:33.096 EAL: No shared files mode enabled, IPC is disabled 00:05:33.096 EAL: Heap on socket 0 was shrunk by 258MB 00:05:33.096 EAL: Trying to obtain current memory policy. 00:05:33.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.096 EAL: Restoring previous memory policy: 4 00:05:33.096 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.096 EAL: request: mp_malloc_sync 00:05:33.096 EAL: No shared files mode enabled, IPC is disabled 00:05:33.096 EAL: Heap on socket 0 was expanded by 514MB 00:05:33.354 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.354 EAL: request: mp_malloc_sync 00:05:33.354 EAL: No shared files mode enabled, IPC is disabled 00:05:33.354 EAL: Heap on socket 0 was shrunk by 514MB 00:05:33.354 EAL: Trying to obtain current memory policy. 
00:05:33.354 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.613 EAL: Restoring previous memory policy: 4 00:05:33.613 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.613 EAL: request: mp_malloc_sync 00:05:33.613 EAL: No shared files mode enabled, IPC is disabled 00:05:33.613 EAL: Heap on socket 0 was expanded by 1026MB 00:05:33.881 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.152 passed 00:05:34.152 00:05:34.152 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.152 suites 1 1 n/a 0 0 00:05:34.152 tests 2 2 2 0 0 00:05:34.152 asserts 5169 5169 5169 0 n/a 00:05:34.152 00:05:34.152 Elapsed time = 1.229 seconds 00:05:34.152 EAL: request: mp_malloc_sync 00:05:34.152 EAL: No shared files mode enabled, IPC is disabled 00:05:34.152 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:34.152 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.152 EAL: request: mp_malloc_sync 00:05:34.152 EAL: No shared files mode enabled, IPC is disabled 00:05:34.152 EAL: Heap on socket 0 was shrunk by 2MB 00:05:34.152 EAL: No shared files mode enabled, IPC is disabled 00:05:34.152 EAL: No shared files mode enabled, IPC is disabled 00:05:34.152 EAL: No shared files mode enabled, IPC is disabled 00:05:34.152 00:05:34.152 real 0m1.421s 00:05:34.152 user 0m0.774s 00:05:34.152 sys 0m0.518s 00:05:34.152 10:10:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.152 10:10:47 -- common/autotest_common.sh@10 -- # set +x 00:05:34.152 ************************************ 00:05:34.152 END TEST env_vtophys 00:05:34.152 ************************************ 00:05:34.152 10:10:47 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:34.152 10:10:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:34.152 10:10:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:34.152 10:10:47 -- common/autotest_common.sh@10 -- # set +x 00:05:34.152 ************************************ 00:05:34.152 START TEST env_pci 00:05:34.152 ************************************ 00:05:34.152 10:10:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:34.152 00:05:34.152 00:05:34.152 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.152 http://cunit.sourceforge.net/ 00:05:34.152 00:05:34.152 00:05:34.152 Suite: pci 00:05:34.152 Test: pci_hook ...[2024-07-26 10:10:47.420614] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 65328 has claimed it 00:05:34.152 passed 00:05:34.152 00:05:34.152 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.152 suites 1 1 n/a 0 0 00:05:34.152 EAL: Cannot find device (10000:00:01.0) 00:05:34.152 EAL: Failed to attach device on primary process 00:05:34.152 tests 1 1 1 0 0 00:05:34.152 asserts 25 25 25 0 n/a 00:05:34.152 00:05:34.152 Elapsed time = 0.002 seconds 00:05:34.152 00:05:34.152 real 0m0.019s 00:05:34.152 user 0m0.006s 00:05:34.152 sys 0m0.012s 00:05:34.152 10:10:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.152 ************************************ 00:05:34.152 END TEST env_pci 00:05:34.152 ************************************ 00:05:34.152 10:10:47 -- common/autotest_common.sh@10 -- # set +x 00:05:34.152 10:10:47 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:34.152 10:10:47 -- env/env.sh@15 -- # uname 00:05:34.152 10:10:47 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:34.152 10:10:47 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:34.152 10:10:47 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:34.152 10:10:47 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:34.152 10:10:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:34.152 10:10:47 -- common/autotest_common.sh@10 -- # set +x 00:05:34.152 ************************************ 00:05:34.152 START TEST env_dpdk_post_init 00:05:34.152 ************************************ 00:05:34.152 10:10:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:34.152 EAL: Detected CPU lcores: 10 00:05:34.152 EAL: Detected NUMA nodes: 1 00:05:34.152 EAL: Detected shared linkage of DPDK 00:05:34.152 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:34.152 EAL: Selected IOVA mode 'PA' 00:05:34.411 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:34.411 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:34.411 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:34.411 Starting DPDK initialization... 00:05:34.411 Starting SPDK post initialization... 00:05:34.411 SPDK NVMe probe 00:05:34.411 Attaching to 0000:00:06.0 00:05:34.411 Attaching to 0000:00:07.0 00:05:34.411 Attached to 0000:00:06.0 00:05:34.411 Attached to 0000:00:07.0 00:05:34.411 Cleaning up... 00:05:34.411 00:05:34.411 real 0m0.178s 00:05:34.411 user 0m0.040s 00:05:34.411 sys 0m0.037s 00:05:34.411 10:10:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.411 ************************************ 00:05:34.411 END TEST env_dpdk_post_init 00:05:34.411 ************************************ 00:05:34.411 10:10:47 -- common/autotest_common.sh@10 -- # set +x 00:05:34.411 10:10:47 -- env/env.sh@26 -- # uname 00:05:34.411 10:10:47 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:34.411 10:10:47 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:34.411 10:10:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:34.411 10:10:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:34.411 10:10:47 -- common/autotest_common.sh@10 -- # set +x 00:05:34.411 ************************************ 00:05:34.411 START TEST env_mem_callbacks 00:05:34.411 ************************************ 00:05:34.411 10:10:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:34.411 EAL: Detected CPU lcores: 10 00:05:34.411 EAL: Detected NUMA nodes: 1 00:05:34.411 EAL: Detected shared linkage of DPDK 00:05:34.411 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:34.411 EAL: Selected IOVA mode 'PA' 00:05:34.411 00:05:34.411 00:05:34.411 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.411 http://cunit.sourceforge.net/ 00:05:34.411 00:05:34.411 00:05:34.411 Suite: memory 00:05:34.411 Test: test ... 
00:05:34.411 register 0x200000200000 2097152 00:05:34.411 malloc 3145728 00:05:34.411 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:34.411 register 0x200000400000 4194304 00:05:34.411 buf 0x200000500000 len 3145728 PASSED 00:05:34.411 malloc 64 00:05:34.411 buf 0x2000004fff40 len 64 PASSED 00:05:34.411 malloc 4194304 00:05:34.411 register 0x200000800000 6291456 00:05:34.411 buf 0x200000a00000 len 4194304 PASSED 00:05:34.411 free 0x200000500000 3145728 00:05:34.411 free 0x2000004fff40 64 00:05:34.411 unregister 0x200000400000 4194304 PASSED 00:05:34.411 free 0x200000a00000 4194304 00:05:34.411 unregister 0x200000800000 6291456 PASSED 00:05:34.411 malloc 8388608 00:05:34.411 register 0x200000400000 10485760 00:05:34.411 buf 0x200000600000 len 8388608 PASSED 00:05:34.411 free 0x200000600000 8388608 00:05:34.411 unregister 0x200000400000 10485760 PASSED 00:05:34.411 passed 00:05:34.411 00:05:34.411 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.411 suites 1 1 n/a 0 0 00:05:34.411 tests 1 1 1 0 0 00:05:34.411 asserts 15 15 15 0 n/a 00:05:34.411 00:05:34.411 Elapsed time = 0.009 seconds 00:05:34.411 00:05:34.411 real 0m0.138s 00:05:34.411 user 0m0.019s 00:05:34.411 sys 0m0.019s 00:05:34.411 10:10:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.411 10:10:47 -- common/autotest_common.sh@10 -- # set +x 00:05:34.411 ************************************ 00:05:34.411 END TEST env_mem_callbacks 00:05:34.411 ************************************ 00:05:34.669 00:05:34.669 real 0m2.329s 00:05:34.669 user 0m1.178s 00:05:34.669 sys 0m0.806s 00:05:34.669 10:10:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.669 ************************************ 00:05:34.669 END TEST env 00:05:34.669 ************************************ 00:05:34.669 10:10:47 -- common/autotest_common.sh@10 -- # set +x 00:05:34.669 10:10:47 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:34.669 10:10:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:34.669 10:10:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:34.669 10:10:47 -- common/autotest_common.sh@10 -- # set +x 00:05:34.669 ************************************ 00:05:34.669 START TEST rpc 00:05:34.669 ************************************ 00:05:34.669 10:10:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:34.669 * Looking for test storage... 00:05:34.669 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:34.669 10:10:48 -- rpc/rpc.sh@65 -- # spdk_pid=65437 00:05:34.669 10:10:48 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:34.669 10:10:48 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.669 10:10:48 -- rpc/rpc.sh@67 -- # waitforlisten 65437 00:05:34.669 10:10:48 -- common/autotest_common.sh@819 -- # '[' -z 65437 ']' 00:05:34.669 10:10:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.669 10:10:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:34.669 10:10:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:34.669 10:10:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:34.669 10:10:48 -- common/autotest_common.sh@10 -- # set +x 00:05:34.669 [2024-07-26 10:10:48.059832] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:34.670 [2024-07-26 10:10:48.059919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65437 ] 00:05:34.928 [2024-07-26 10:10:48.191515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.928 [2024-07-26 10:10:48.282003] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:34.928 [2024-07-26 10:10:48.282166] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:34.928 [2024-07-26 10:10:48.282181] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 65437' to capture a snapshot of events at runtime. 00:05:34.928 [2024-07-26 10:10:48.282189] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid65437 for offline analysis/debug. 00:05:34.928 [2024-07-26 10:10:48.282218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.863 10:10:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:35.863 10:10:49 -- common/autotest_common.sh@852 -- # return 0 00:05:35.863 10:10:49 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:35.863 10:10:49 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:35.863 10:10:49 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:35.863 10:10:49 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:35.863 10:10:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:35.863 10:10:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:35.863 10:10:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.863 ************************************ 00:05:35.863 START TEST rpc_integrity 00:05:35.863 ************************************ 00:05:35.863 10:10:49 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:35.863 10:10:49 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:35.863 10:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:35.863 10:10:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.863 10:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:35.864 10:10:49 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:35.864 10:10:49 -- rpc/rpc.sh@13 -- # jq length 00:05:35.864 10:10:49 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:35.864 10:10:49 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:35.864 10:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:35.864 10:10:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.864 10:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:35.864 10:10:49 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:35.864 10:10:49 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:35.864 10:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:35.864 10:10:49 -- 
common/autotest_common.sh@10 -- # set +x 00:05:35.864 10:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:35.864 10:10:49 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:35.864 { 00:05:35.864 "name": "Malloc0", 00:05:35.864 "aliases": [ 00:05:35.864 "713409d7-a81e-4472-9e46-050bf239ba30" 00:05:35.864 ], 00:05:35.864 "product_name": "Malloc disk", 00:05:35.864 "block_size": 512, 00:05:35.864 "num_blocks": 16384, 00:05:35.864 "uuid": "713409d7-a81e-4472-9e46-050bf239ba30", 00:05:35.864 "assigned_rate_limits": { 00:05:35.864 "rw_ios_per_sec": 0, 00:05:35.864 "rw_mbytes_per_sec": 0, 00:05:35.864 "r_mbytes_per_sec": 0, 00:05:35.864 "w_mbytes_per_sec": 0 00:05:35.864 }, 00:05:35.864 "claimed": false, 00:05:35.864 "zoned": false, 00:05:35.864 "supported_io_types": { 00:05:35.864 "read": true, 00:05:35.864 "write": true, 00:05:35.864 "unmap": true, 00:05:35.864 "write_zeroes": true, 00:05:35.864 "flush": true, 00:05:35.864 "reset": true, 00:05:35.864 "compare": false, 00:05:35.864 "compare_and_write": false, 00:05:35.864 "abort": true, 00:05:35.864 "nvme_admin": false, 00:05:35.864 "nvme_io": false 00:05:35.864 }, 00:05:35.864 "memory_domains": [ 00:05:35.864 { 00:05:35.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.864 "dma_device_type": 2 00:05:35.864 } 00:05:35.864 ], 00:05:35.864 "driver_specific": {} 00:05:35.864 } 00:05:35.864 ]' 00:05:35.864 10:10:49 -- rpc/rpc.sh@17 -- # jq length 00:05:35.864 10:10:49 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:35.864 10:10:49 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:35.864 10:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:35.864 10:10:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.864 [2024-07-26 10:10:49.212607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:35.864 [2024-07-26 10:10:49.212666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:35.864 [2024-07-26 10:10:49.212694] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x857a10 00:05:35.864 [2024-07-26 10:10:49.212705] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:35.864 [2024-07-26 10:10:49.214411] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:35.864 [2024-07-26 10:10:49.214446] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:35.864 Passthru0 00:05:35.864 10:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:35.864 10:10:49 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:35.864 10:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:35.864 10:10:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.864 10:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:35.864 10:10:49 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:35.864 { 00:05:35.864 "name": "Malloc0", 00:05:35.864 "aliases": [ 00:05:35.864 "713409d7-a81e-4472-9e46-050bf239ba30" 00:05:35.864 ], 00:05:35.864 "product_name": "Malloc disk", 00:05:35.864 "block_size": 512, 00:05:35.864 "num_blocks": 16384, 00:05:35.864 "uuid": "713409d7-a81e-4472-9e46-050bf239ba30", 00:05:35.864 "assigned_rate_limits": { 00:05:35.864 "rw_ios_per_sec": 0, 00:05:35.864 "rw_mbytes_per_sec": 0, 00:05:35.864 "r_mbytes_per_sec": 0, 00:05:35.864 "w_mbytes_per_sec": 0 00:05:35.864 }, 00:05:35.864 "claimed": true, 00:05:35.864 "claim_type": "exclusive_write", 00:05:35.864 "zoned": false, 00:05:35.864 "supported_io_types": { 00:05:35.864 "read": true, 
00:05:35.864 "write": true, 00:05:35.864 "unmap": true, 00:05:35.864 "write_zeroes": true, 00:05:35.864 "flush": true, 00:05:35.864 "reset": true, 00:05:35.864 "compare": false, 00:05:35.864 "compare_and_write": false, 00:05:35.864 "abort": true, 00:05:35.864 "nvme_admin": false, 00:05:35.864 "nvme_io": false 00:05:35.864 }, 00:05:35.864 "memory_domains": [ 00:05:35.864 { 00:05:35.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.864 "dma_device_type": 2 00:05:35.864 } 00:05:35.864 ], 00:05:35.864 "driver_specific": {} 00:05:35.864 }, 00:05:35.864 { 00:05:35.864 "name": "Passthru0", 00:05:35.864 "aliases": [ 00:05:35.864 "0565ce13-6bde-51cb-b94c-a5b66f6637c2" 00:05:35.864 ], 00:05:35.864 "product_name": "passthru", 00:05:35.864 "block_size": 512, 00:05:35.864 "num_blocks": 16384, 00:05:35.864 "uuid": "0565ce13-6bde-51cb-b94c-a5b66f6637c2", 00:05:35.864 "assigned_rate_limits": { 00:05:35.864 "rw_ios_per_sec": 0, 00:05:35.864 "rw_mbytes_per_sec": 0, 00:05:35.864 "r_mbytes_per_sec": 0, 00:05:35.864 "w_mbytes_per_sec": 0 00:05:35.864 }, 00:05:35.864 "claimed": false, 00:05:35.864 "zoned": false, 00:05:35.864 "supported_io_types": { 00:05:35.864 "read": true, 00:05:35.864 "write": true, 00:05:35.864 "unmap": true, 00:05:35.864 "write_zeroes": true, 00:05:35.864 "flush": true, 00:05:35.864 "reset": true, 00:05:35.864 "compare": false, 00:05:35.864 "compare_and_write": false, 00:05:35.864 "abort": true, 00:05:35.864 "nvme_admin": false, 00:05:35.864 "nvme_io": false 00:05:35.864 }, 00:05:35.864 "memory_domains": [ 00:05:35.864 { 00:05:35.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.864 "dma_device_type": 2 00:05:35.864 } 00:05:35.864 ], 00:05:35.864 "driver_specific": { 00:05:35.864 "passthru": { 00:05:35.864 "name": "Passthru0", 00:05:35.864 "base_bdev_name": "Malloc0" 00:05:35.864 } 00:05:35.864 } 00:05:35.864 } 00:05:35.864 ]' 00:05:35.864 10:10:49 -- rpc/rpc.sh@21 -- # jq length 00:05:35.864 10:10:49 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:35.864 10:10:49 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:35.864 10:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:35.864 10:10:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.864 10:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:35.864 10:10:49 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:35.864 10:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:35.864 10:10:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.864 10:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:35.864 10:10:49 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:35.864 10:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:35.864 10:10:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.864 10:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:35.864 10:10:49 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:35.864 10:10:49 -- rpc/rpc.sh@26 -- # jq length 00:05:36.123 10:10:49 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:36.123 00:05:36.123 real 0m0.307s 00:05:36.123 user 0m0.208s 00:05:36.123 sys 0m0.033s 00:05:36.123 ************************************ 00:05:36.123 END TEST rpc_integrity 00:05:36.123 ************************************ 00:05:36.123 10:10:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.123 10:10:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.123 10:10:49 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:36.123 10:10:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
00:05:36.123 10:10:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.123 10:10:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.123 ************************************ 00:05:36.123 START TEST rpc_plugins 00:05:36.123 ************************************ 00:05:36.123 10:10:49 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:36.123 10:10:49 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:36.123 10:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.123 10:10:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.123 10:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.123 10:10:49 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:36.123 10:10:49 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:36.123 10:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.123 10:10:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.123 10:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.123 10:10:49 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:36.123 { 00:05:36.123 "name": "Malloc1", 00:05:36.123 "aliases": [ 00:05:36.123 "1aa23531-03f3-4931-b201-c137713b45b6" 00:05:36.123 ], 00:05:36.123 "product_name": "Malloc disk", 00:05:36.123 "block_size": 4096, 00:05:36.123 "num_blocks": 256, 00:05:36.123 "uuid": "1aa23531-03f3-4931-b201-c137713b45b6", 00:05:36.123 "assigned_rate_limits": { 00:05:36.123 "rw_ios_per_sec": 0, 00:05:36.123 "rw_mbytes_per_sec": 0, 00:05:36.123 "r_mbytes_per_sec": 0, 00:05:36.123 "w_mbytes_per_sec": 0 00:05:36.123 }, 00:05:36.123 "claimed": false, 00:05:36.123 "zoned": false, 00:05:36.123 "supported_io_types": { 00:05:36.123 "read": true, 00:05:36.123 "write": true, 00:05:36.123 "unmap": true, 00:05:36.123 "write_zeroes": true, 00:05:36.123 "flush": true, 00:05:36.123 "reset": true, 00:05:36.123 "compare": false, 00:05:36.123 "compare_and_write": false, 00:05:36.123 "abort": true, 00:05:36.123 "nvme_admin": false, 00:05:36.123 "nvme_io": false 00:05:36.123 }, 00:05:36.123 "memory_domains": [ 00:05:36.123 { 00:05:36.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.123 "dma_device_type": 2 00:05:36.123 } 00:05:36.123 ], 00:05:36.123 "driver_specific": {} 00:05:36.123 } 00:05:36.123 ]' 00:05:36.123 10:10:49 -- rpc/rpc.sh@32 -- # jq length 00:05:36.123 10:10:49 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:36.123 10:10:49 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:36.123 10:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.123 10:10:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.123 10:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.123 10:10:49 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:36.123 10:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.123 10:10:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.123 10:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.123 10:10:49 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:36.123 10:10:49 -- rpc/rpc.sh@36 -- # jq length 00:05:36.123 10:10:49 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:36.123 00:05:36.123 real 0m0.154s 00:05:36.123 user 0m0.099s 00:05:36.123 sys 0m0.020s 00:05:36.123 ************************************ 00:05:36.123 END TEST rpc_plugins 00:05:36.123 ************************************ 00:05:36.123 10:10:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.123 10:10:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.382 10:10:49 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:05:36.382 10:10:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:36.382 10:10:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.382 10:10:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.382 ************************************ 00:05:36.382 START TEST rpc_trace_cmd_test 00:05:36.382 ************************************ 00:05:36.382 10:10:49 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:36.382 10:10:49 -- rpc/rpc.sh@40 -- # local info 00:05:36.382 10:10:49 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:36.382 10:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.382 10:10:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.382 10:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.382 10:10:49 -- rpc/rpc.sh@42 -- # info='{ 00:05:36.382 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid65437", 00:05:36.382 "tpoint_group_mask": "0x8", 00:05:36.382 "iscsi_conn": { 00:05:36.382 "mask": "0x2", 00:05:36.382 "tpoint_mask": "0x0" 00:05:36.382 }, 00:05:36.382 "scsi": { 00:05:36.382 "mask": "0x4", 00:05:36.382 "tpoint_mask": "0x0" 00:05:36.382 }, 00:05:36.382 "bdev": { 00:05:36.382 "mask": "0x8", 00:05:36.382 "tpoint_mask": "0xffffffffffffffff" 00:05:36.382 }, 00:05:36.382 "nvmf_rdma": { 00:05:36.382 "mask": "0x10", 00:05:36.382 "tpoint_mask": "0x0" 00:05:36.382 }, 00:05:36.382 "nvmf_tcp": { 00:05:36.382 "mask": "0x20", 00:05:36.382 "tpoint_mask": "0x0" 00:05:36.382 }, 00:05:36.382 "ftl": { 00:05:36.382 "mask": "0x40", 00:05:36.382 "tpoint_mask": "0x0" 00:05:36.382 }, 00:05:36.382 "blobfs": { 00:05:36.382 "mask": "0x80", 00:05:36.382 "tpoint_mask": "0x0" 00:05:36.382 }, 00:05:36.382 "dsa": { 00:05:36.382 "mask": "0x200", 00:05:36.382 "tpoint_mask": "0x0" 00:05:36.382 }, 00:05:36.382 "thread": { 00:05:36.382 "mask": "0x400", 00:05:36.382 "tpoint_mask": "0x0" 00:05:36.382 }, 00:05:36.382 "nvme_pcie": { 00:05:36.382 "mask": "0x800", 00:05:36.382 "tpoint_mask": "0x0" 00:05:36.382 }, 00:05:36.382 "iaa": { 00:05:36.382 "mask": "0x1000", 00:05:36.382 "tpoint_mask": "0x0" 00:05:36.382 }, 00:05:36.382 "nvme_tcp": { 00:05:36.382 "mask": "0x2000", 00:05:36.382 "tpoint_mask": "0x0" 00:05:36.382 }, 00:05:36.382 "bdev_nvme": { 00:05:36.382 "mask": "0x4000", 00:05:36.382 "tpoint_mask": "0x0" 00:05:36.382 } 00:05:36.382 }' 00:05:36.382 10:10:49 -- rpc/rpc.sh@43 -- # jq length 00:05:36.382 10:10:49 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:36.382 10:10:49 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:36.382 10:10:49 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:36.382 10:10:49 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:36.382 10:10:49 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:36.382 10:10:49 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:36.641 10:10:49 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:36.641 10:10:49 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:36.641 10:10:49 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:36.641 00:05:36.641 real 0m0.303s 00:05:36.641 user 0m0.268s 00:05:36.641 sys 0m0.025s 00:05:36.641 ************************************ 00:05:36.641 END TEST rpc_trace_cmd_test 00:05:36.641 ************************************ 00:05:36.641 10:10:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.641 10:10:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.641 10:10:49 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:36.641 10:10:49 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:36.641 10:10:49 -- rpc/rpc.sh@81 -- # run_test 
rpc_daemon_integrity rpc_integrity 00:05:36.641 10:10:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:36.641 10:10:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.641 10:10:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.641 ************************************ 00:05:36.641 START TEST rpc_daemon_integrity 00:05:36.641 ************************************ 00:05:36.641 10:10:49 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:36.641 10:10:49 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:36.641 10:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.641 10:10:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.641 10:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.641 10:10:49 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:36.641 10:10:49 -- rpc/rpc.sh@13 -- # jq length 00:05:36.641 10:10:50 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:36.641 10:10:50 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:36.641 10:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.641 10:10:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.641 10:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.641 10:10:50 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:36.641 10:10:50 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:36.641 10:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.641 10:10:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.641 10:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.641 10:10:50 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:36.641 { 00:05:36.642 "name": "Malloc2", 00:05:36.642 "aliases": [ 00:05:36.642 "8a393a9d-8ac8-4391-8cf8-85aadbf3daba" 00:05:36.642 ], 00:05:36.642 "product_name": "Malloc disk", 00:05:36.642 "block_size": 512, 00:05:36.642 "num_blocks": 16384, 00:05:36.642 "uuid": "8a393a9d-8ac8-4391-8cf8-85aadbf3daba", 00:05:36.642 "assigned_rate_limits": { 00:05:36.642 "rw_ios_per_sec": 0, 00:05:36.642 "rw_mbytes_per_sec": 0, 00:05:36.642 "r_mbytes_per_sec": 0, 00:05:36.642 "w_mbytes_per_sec": 0 00:05:36.642 }, 00:05:36.642 "claimed": false, 00:05:36.642 "zoned": false, 00:05:36.642 "supported_io_types": { 00:05:36.642 "read": true, 00:05:36.642 "write": true, 00:05:36.642 "unmap": true, 00:05:36.642 "write_zeroes": true, 00:05:36.642 "flush": true, 00:05:36.642 "reset": true, 00:05:36.642 "compare": false, 00:05:36.642 "compare_and_write": false, 00:05:36.642 "abort": true, 00:05:36.642 "nvme_admin": false, 00:05:36.642 "nvme_io": false 00:05:36.642 }, 00:05:36.642 "memory_domains": [ 00:05:36.642 { 00:05:36.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.642 "dma_device_type": 2 00:05:36.642 } 00:05:36.642 ], 00:05:36.642 "driver_specific": {} 00:05:36.642 } 00:05:36.642 ]' 00:05:36.642 10:10:50 -- rpc/rpc.sh@17 -- # jq length 00:05:36.900 10:10:50 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:36.900 10:10:50 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:36.900 10:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.900 10:10:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.900 [2024-07-26 10:10:50.105151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:36.900 [2024-07-26 10:10:50.105218] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:36.900 [2024-07-26 10:10:50.105240] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x68f830 00:05:36.900 [2024-07-26 
10:10:50.105249] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:36.900 [2024-07-26 10:10:50.106741] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:36.901 [2024-07-26 10:10:50.106774] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:36.901 Passthru0 00:05:36.901 10:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.901 10:10:50 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:36.901 10:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.901 10:10:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.901 10:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.901 10:10:50 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:36.901 { 00:05:36.901 "name": "Malloc2", 00:05:36.901 "aliases": [ 00:05:36.901 "8a393a9d-8ac8-4391-8cf8-85aadbf3daba" 00:05:36.901 ], 00:05:36.901 "product_name": "Malloc disk", 00:05:36.901 "block_size": 512, 00:05:36.901 "num_blocks": 16384, 00:05:36.901 "uuid": "8a393a9d-8ac8-4391-8cf8-85aadbf3daba", 00:05:36.901 "assigned_rate_limits": { 00:05:36.901 "rw_ios_per_sec": 0, 00:05:36.901 "rw_mbytes_per_sec": 0, 00:05:36.901 "r_mbytes_per_sec": 0, 00:05:36.901 "w_mbytes_per_sec": 0 00:05:36.901 }, 00:05:36.901 "claimed": true, 00:05:36.901 "claim_type": "exclusive_write", 00:05:36.901 "zoned": false, 00:05:36.901 "supported_io_types": { 00:05:36.901 "read": true, 00:05:36.901 "write": true, 00:05:36.901 "unmap": true, 00:05:36.901 "write_zeroes": true, 00:05:36.901 "flush": true, 00:05:36.901 "reset": true, 00:05:36.901 "compare": false, 00:05:36.901 "compare_and_write": false, 00:05:36.901 "abort": true, 00:05:36.901 "nvme_admin": false, 00:05:36.901 "nvme_io": false 00:05:36.901 }, 00:05:36.901 "memory_domains": [ 00:05:36.901 { 00:05:36.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.901 "dma_device_type": 2 00:05:36.901 } 00:05:36.901 ], 00:05:36.901 "driver_specific": {} 00:05:36.901 }, 00:05:36.901 { 00:05:36.901 "name": "Passthru0", 00:05:36.901 "aliases": [ 00:05:36.901 "1adf2589-481e-57d0-8c11-c8cd8f8164c3" 00:05:36.901 ], 00:05:36.901 "product_name": "passthru", 00:05:36.901 "block_size": 512, 00:05:36.901 "num_blocks": 16384, 00:05:36.901 "uuid": "1adf2589-481e-57d0-8c11-c8cd8f8164c3", 00:05:36.901 "assigned_rate_limits": { 00:05:36.901 "rw_ios_per_sec": 0, 00:05:36.901 "rw_mbytes_per_sec": 0, 00:05:36.901 "r_mbytes_per_sec": 0, 00:05:36.901 "w_mbytes_per_sec": 0 00:05:36.901 }, 00:05:36.901 "claimed": false, 00:05:36.901 "zoned": false, 00:05:36.901 "supported_io_types": { 00:05:36.901 "read": true, 00:05:36.901 "write": true, 00:05:36.901 "unmap": true, 00:05:36.901 "write_zeroes": true, 00:05:36.901 "flush": true, 00:05:36.901 "reset": true, 00:05:36.901 "compare": false, 00:05:36.901 "compare_and_write": false, 00:05:36.901 "abort": true, 00:05:36.901 "nvme_admin": false, 00:05:36.901 "nvme_io": false 00:05:36.901 }, 00:05:36.901 "memory_domains": [ 00:05:36.901 { 00:05:36.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.901 "dma_device_type": 2 00:05:36.901 } 00:05:36.901 ], 00:05:36.901 "driver_specific": { 00:05:36.901 "passthru": { 00:05:36.901 "name": "Passthru0", 00:05:36.901 "base_bdev_name": "Malloc2" 00:05:36.901 } 00:05:36.901 } 00:05:36.901 } 00:05:36.901 ]' 00:05:36.901 10:10:50 -- rpc/rpc.sh@21 -- # jq length 00:05:36.901 10:10:50 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:36.901 10:10:50 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:36.901 10:10:50 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.901 10:10:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.901 10:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.901 10:10:50 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:36.901 10:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.901 10:10:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.901 10:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.901 10:10:50 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:36.901 10:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.901 10:10:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.901 10:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.901 10:10:50 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:36.901 10:10:50 -- rpc/rpc.sh@26 -- # jq length 00:05:36.901 10:10:50 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:36.901 00:05:36.901 real 0m0.294s 00:05:36.901 user 0m0.193s 00:05:36.901 sys 0m0.036s 00:05:36.901 ************************************ 00:05:36.901 END TEST rpc_daemon_integrity 00:05:36.901 ************************************ 00:05:36.901 10:10:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.901 10:10:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.901 10:10:50 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:36.901 10:10:50 -- rpc/rpc.sh@84 -- # killprocess 65437 00:05:36.901 10:10:50 -- common/autotest_common.sh@926 -- # '[' -z 65437 ']' 00:05:36.901 10:10:50 -- common/autotest_common.sh@930 -- # kill -0 65437 00:05:36.901 10:10:50 -- common/autotest_common.sh@931 -- # uname 00:05:36.901 10:10:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:36.901 10:10:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65437 00:05:36.901 10:10:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:36.901 killing process with pid 65437 00:05:36.901 10:10:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:36.901 10:10:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65437' 00:05:36.901 10:10:50 -- common/autotest_common.sh@945 -- # kill 65437 00:05:36.901 10:10:50 -- common/autotest_common.sh@950 -- # wait 65437 00:05:37.468 00:05:37.468 real 0m2.754s 00:05:37.468 user 0m3.624s 00:05:37.468 sys 0m0.624s 00:05:37.468 ************************************ 00:05:37.468 END TEST rpc 00:05:37.468 ************************************ 00:05:37.468 10:10:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.468 10:10:50 -- common/autotest_common.sh@10 -- # set +x 00:05:37.468 10:10:50 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:37.468 10:10:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:37.468 10:10:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.468 10:10:50 -- common/autotest_common.sh@10 -- # set +x 00:05:37.468 ************************************ 00:05:37.468 START TEST rpc_client 00:05:37.468 ************************************ 00:05:37.468 10:10:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:37.468 * Looking for test storage... 
00:05:37.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:37.468 10:10:50 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:37.468 OK 00:05:37.468 10:10:50 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:37.468 00:05:37.468 real 0m0.095s 00:05:37.468 user 0m0.041s 00:05:37.468 sys 0m0.061s 00:05:37.468 10:10:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.468 ************************************ 00:05:37.468 END TEST rpc_client 00:05:37.468 ************************************ 00:05:37.468 10:10:50 -- common/autotest_common.sh@10 -- # set +x 00:05:37.468 10:10:50 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:37.468 10:10:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:37.468 10:10:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.468 10:10:50 -- common/autotest_common.sh@10 -- # set +x 00:05:37.468 ************************************ 00:05:37.468 START TEST json_config 00:05:37.468 ************************************ 00:05:37.468 10:10:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:37.727 10:10:50 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:37.727 10:10:50 -- nvmf/common.sh@7 -- # uname -s 00:05:37.727 10:10:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:37.727 10:10:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:37.727 10:10:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:37.727 10:10:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:37.727 10:10:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:37.727 10:10:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:37.727 10:10:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:37.727 10:10:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:37.727 10:10:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:37.727 10:10:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:37.727 10:10:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:05:37.727 10:10:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:05:37.727 10:10:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:37.727 10:10:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:37.727 10:10:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:37.727 10:10:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:37.727 10:10:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:37.727 10:10:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:37.727 10:10:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:37.727 10:10:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.727 10:10:50 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.727 10:10:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.727 10:10:50 -- paths/export.sh@5 -- # export PATH 00:05:37.727 10:10:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.727 10:10:50 -- nvmf/common.sh@46 -- # : 0 00:05:37.727 10:10:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:37.727 10:10:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:37.727 10:10:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:37.728 10:10:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:37.728 10:10:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:37.728 10:10:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:37.728 10:10:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:37.728 10:10:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:37.728 10:10:50 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:37.728 10:10:50 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:37.728 10:10:50 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:37.728 10:10:50 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:37.728 10:10:50 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:37.728 10:10:50 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:37.728 10:10:50 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:37.728 10:10:50 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:37.728 10:10:50 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:37.728 10:10:50 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:37.728 10:10:50 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:37.728 10:10:50 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:37.728 10:10:50 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:37.728 10:10:50 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:37.728 INFO: JSON configuration test init 
00:05:37.728 10:10:50 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:37.728 10:10:50 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:37.728 10:10:50 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:37.728 10:10:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:37.728 10:10:50 -- common/autotest_common.sh@10 -- # set +x 00:05:37.728 10:10:50 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:37.728 10:10:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:37.728 10:10:50 -- common/autotest_common.sh@10 -- # set +x 00:05:37.728 10:10:50 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:37.728 10:10:50 -- json_config/json_config.sh@98 -- # local app=target 00:05:37.728 10:10:50 -- json_config/json_config.sh@99 -- # shift 00:05:37.728 10:10:50 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:37.728 10:10:50 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:37.728 10:10:50 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:37.728 10:10:50 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:37.728 10:10:50 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:37.728 10:10:50 -- json_config/json_config.sh@111 -- # app_pid[$app]=65673 00:05:37.728 Waiting for target to run... 00:05:37.728 10:10:50 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:37.728 10:10:50 -- json_config/json_config.sh@114 -- # waitforlisten 65673 /var/tmp/spdk_tgt.sock 00:05:37.728 10:10:50 -- common/autotest_common.sh@819 -- # '[' -z 65673 ']' 00:05:37.728 10:10:50 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:37.728 10:10:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:37.728 10:10:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:37.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:37.728 10:10:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:37.728 10:10:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:37.728 10:10:50 -- common/autotest_common.sh@10 -- # set +x 00:05:37.728 [2024-07-26 10:10:51.052351] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:05:37.728 [2024-07-26 10:10:51.052529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65673 ] 00:05:38.295 [2024-07-26 10:10:51.484371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.296 [2024-07-26 10:10:51.551713] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:38.296 [2024-07-26 10:10:51.551918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.862 10:10:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:38.862 10:10:52 -- common/autotest_common.sh@852 -- # return 0 00:05:38.862 00:05:38.862 10:10:52 -- json_config/json_config.sh@115 -- # echo '' 00:05:38.862 10:10:52 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:38.862 10:10:52 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:38.862 10:10:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:38.862 10:10:52 -- common/autotest_common.sh@10 -- # set +x 00:05:38.862 10:10:52 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:38.862 10:10:52 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:38.862 10:10:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:38.862 10:10:52 -- common/autotest_common.sh@10 -- # set +x 00:05:38.862 10:10:52 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:38.862 10:10:52 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:38.862 10:10:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:39.121 10:10:52 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:39.121 10:10:52 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:39.121 10:10:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:39.121 10:10:52 -- common/autotest_common.sh@10 -- # set +x 00:05:39.121 10:10:52 -- json_config/json_config.sh@48 -- # local ret=0 00:05:39.121 10:10:52 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:39.121 10:10:52 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:39.121 10:10:52 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:39.121 10:10:52 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:39.121 10:10:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:39.379 10:10:52 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:39.379 10:10:52 -- json_config/json_config.sh@51 -- # local get_types 00:05:39.379 10:10:52 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:39.379 10:10:52 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:39.379 10:10:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:39.379 10:10:52 -- common/autotest_common.sh@10 -- # set +x 00:05:39.379 10:10:52 -- json_config/json_config.sh@58 -- # return 0 00:05:39.379 10:10:52 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:39.379 10:10:52 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 
00:05:39.379 10:10:52 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:39.379 10:10:52 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:39.379 10:10:52 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:39.379 10:10:52 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:39.379 10:10:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:39.379 10:10:52 -- common/autotest_common.sh@10 -- # set +x 00:05:39.379 10:10:52 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:39.379 10:10:52 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:39.379 10:10:52 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:39.379 10:10:52 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:39.379 10:10:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:39.636 MallocForNvmf0 00:05:39.636 10:10:53 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:39.636 10:10:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:39.895 MallocForNvmf1 00:05:39.895 10:10:53 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:39.895 10:10:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:40.154 [2024-07-26 10:10:53.548166] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:40.154 10:10:53 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:40.154 10:10:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:40.412 10:10:53 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:40.412 10:10:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:40.670 10:10:54 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:40.671 10:10:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:40.929 10:10:54 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:40.929 10:10:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:41.187 [2024-07-26 10:10:54.472734] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:41.187 10:10:54 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:41.187 10:10:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:41.187 10:10:54 -- common/autotest_common.sh@10 -- # set +x 00:05:41.187 10:10:54 -- 
json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:41.187 10:10:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:41.187 10:10:54 -- common/autotest_common.sh@10 -- # set +x 00:05:41.187 10:10:54 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:41.187 10:10:54 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:41.187 10:10:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:41.479 MallocBdevForConfigChangeCheck 00:05:41.479 10:10:54 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:41.479 10:10:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:41.479 10:10:54 -- common/autotest_common.sh@10 -- # set +x 00:05:41.479 10:10:54 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:41.479 10:10:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:42.047 INFO: shutting down applications... 00:05:42.047 10:10:55 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:42.047 10:10:55 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:42.047 10:10:55 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:42.047 10:10:55 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:42.047 10:10:55 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:42.305 Calling clear_iscsi_subsystem 00:05:42.305 Calling clear_nvmf_subsystem 00:05:42.305 Calling clear_nbd_subsystem 00:05:42.305 Calling clear_ublk_subsystem 00:05:42.305 Calling clear_vhost_blk_subsystem 00:05:42.305 Calling clear_vhost_scsi_subsystem 00:05:42.305 Calling clear_scheduler_subsystem 00:05:42.305 Calling clear_bdev_subsystem 00:05:42.305 Calling clear_accel_subsystem 00:05:42.305 Calling clear_vmd_subsystem 00:05:42.305 Calling clear_sock_subsystem 00:05:42.305 Calling clear_iobuf_subsystem 00:05:42.305 10:10:55 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:42.305 10:10:55 -- json_config/json_config.sh@396 -- # count=100 00:05:42.305 10:10:55 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:42.305 10:10:55 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:42.305 10:10:55 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:42.305 10:10:55 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:42.563 10:10:55 -- json_config/json_config.sh@398 -- # break 00:05:42.563 10:10:55 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:42.563 10:10:55 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:42.563 10:10:55 -- json_config/json_config.sh@120 -- # local app=target 00:05:42.563 10:10:55 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:42.563 10:10:55 -- json_config/json_config.sh@124 -- # [[ -n 65673 ]] 00:05:42.563 10:10:55 -- json_config/json_config.sh@127 -- # kill -SIGINT 65673 00:05:42.563 10:10:55 -- json_config/json_config.sh@129 -- # (( i = 0 )) 
00:05:42.563 10:10:55 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:42.563 10:10:55 -- json_config/json_config.sh@130 -- # kill -0 65673 00:05:42.563 10:10:55 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:43.128 10:10:56 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:43.128 10:10:56 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:43.128 10:10:56 -- json_config/json_config.sh@130 -- # kill -0 65673 00:05:43.128 10:10:56 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:43.128 10:10:56 -- json_config/json_config.sh@132 -- # break 00:05:43.128 10:10:56 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:43.128 10:10:56 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:43.128 SPDK target shutdown done 00:05:43.128 INFO: relaunching applications... 00:05:43.128 10:10:56 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:43.128 10:10:56 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:43.128 10:10:56 -- json_config/json_config.sh@98 -- # local app=target 00:05:43.128 10:10:56 -- json_config/json_config.sh@99 -- # shift 00:05:43.128 10:10:56 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:43.128 10:10:56 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:43.128 10:10:56 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:43.128 10:10:56 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:43.128 10:10:56 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:43.128 10:10:56 -- json_config/json_config.sh@111 -- # app_pid[$app]=65864 00:05:43.128 Waiting for target to run... 00:05:43.128 10:10:56 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:43.128 10:10:56 -- json_config/json_config.sh@114 -- # waitforlisten 65864 /var/tmp/spdk_tgt.sock 00:05:43.128 10:10:56 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:43.128 10:10:56 -- common/autotest_common.sh@819 -- # '[' -z 65864 ']' 00:05:43.128 10:10:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:43.128 10:10:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:43.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:43.128 10:10:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:43.128 10:10:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:43.128 10:10:56 -- common/autotest_common.sh@10 -- # set +x 00:05:43.128 [2024-07-26 10:10:56.534020] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:05:43.128 [2024-07-26 10:10:56.534110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65864 ] 00:05:43.693 [2024-07-26 10:10:56.938352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.693 [2024-07-26 10:10:57.004277] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.693 [2024-07-26 10:10:57.004447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.951 [2024-07-26 10:10:57.310890] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:43.951 [2024-07-26 10:10:57.342965] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:44.209 10:10:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:44.209 10:10:57 -- common/autotest_common.sh@852 -- # return 0 00:05:44.209 00:05:44.209 10:10:57 -- json_config/json_config.sh@115 -- # echo '' 00:05:44.209 10:10:57 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:44.209 INFO: Checking if target configuration is the same... 00:05:44.209 10:10:57 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:44.209 10:10:57 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:44.209 10:10:57 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:44.209 10:10:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:44.209 + '[' 2 -ne 2 ']' 00:05:44.209 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:44.209 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:44.209 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:44.209 +++ basename /dev/fd/62 00:05:44.209 ++ mktemp /tmp/62.XXX 00:05:44.209 + tmp_file_1=/tmp/62.VkZ 00:05:44.210 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:44.210 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:44.210 + tmp_file_2=/tmp/spdk_tgt_config.json.rjQ 00:05:44.210 + ret=0 00:05:44.210 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:44.467 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:44.468 + diff -u /tmp/62.VkZ /tmp/spdk_tgt_config.json.rjQ 00:05:44.725 INFO: JSON config files are the same 00:05:44.725 + echo 'INFO: JSON config files are the same' 00:05:44.725 + rm /tmp/62.VkZ /tmp/spdk_tgt_config.json.rjQ 00:05:44.725 + exit 0 00:05:44.725 10:10:57 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:44.725 INFO: changing configuration and checking if this can be detected... 00:05:44.725 10:10:57 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:05:44.725 10:10:57 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:44.725 10:10:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:44.725 10:10:58 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:44.725 10:10:58 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:44.725 10:10:58 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:44.725 + '[' 2 -ne 2 ']' 00:05:44.725 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:44.725 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:44.725 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:44.725 +++ basename /dev/fd/62 00:05:44.983 ++ mktemp /tmp/62.XXX 00:05:44.983 + tmp_file_1=/tmp/62.Rkc 00:05:44.983 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:44.983 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:44.983 + tmp_file_2=/tmp/spdk_tgt_config.json.Gof 00:05:44.983 + ret=0 00:05:44.983 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:45.241 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:45.241 + diff -u /tmp/62.Rkc /tmp/spdk_tgt_config.json.Gof 00:05:45.241 + ret=1 00:05:45.242 + echo '=== Start of file: /tmp/62.Rkc ===' 00:05:45.242 + cat /tmp/62.Rkc 00:05:45.242 + echo '=== End of file: /tmp/62.Rkc ===' 00:05:45.242 + echo '' 00:05:45.242 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Gof ===' 00:05:45.242 + cat /tmp/spdk_tgt_config.json.Gof 00:05:45.242 + echo '=== End of file: /tmp/spdk_tgt_config.json.Gof ===' 00:05:45.242 + echo '' 00:05:45.242 + rm /tmp/62.Rkc /tmp/spdk_tgt_config.json.Gof 00:05:45.242 + exit 1 00:05:45.242 INFO: configuration change detected. 00:05:45.242 10:10:58 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
00:05:45.242 10:10:58 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:45.242 10:10:58 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:45.242 10:10:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:45.242 10:10:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.242 10:10:58 -- json_config/json_config.sh@360 -- # local ret=0 00:05:45.242 10:10:58 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:45.242 10:10:58 -- json_config/json_config.sh@370 -- # [[ -n 65864 ]] 00:05:45.242 10:10:58 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:45.242 10:10:58 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:45.242 10:10:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:45.242 10:10:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.242 10:10:58 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:45.242 10:10:58 -- json_config/json_config.sh@246 -- # uname -s 00:05:45.242 10:10:58 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:45.242 10:10:58 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:45.242 10:10:58 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:45.242 10:10:58 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:45.242 10:10:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:45.242 10:10:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.242 10:10:58 -- json_config/json_config.sh@376 -- # killprocess 65864 00:05:45.242 10:10:58 -- common/autotest_common.sh@926 -- # '[' -z 65864 ']' 00:05:45.242 10:10:58 -- common/autotest_common.sh@930 -- # kill -0 65864 00:05:45.242 10:10:58 -- common/autotest_common.sh@931 -- # uname 00:05:45.242 10:10:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:45.242 10:10:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65864 00:05:45.242 10:10:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:45.242 10:10:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:45.242 killing process with pid 65864 00:05:45.242 10:10:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65864' 00:05:45.242 10:10:58 -- common/autotest_common.sh@945 -- # kill 65864 00:05:45.242 10:10:58 -- common/autotest_common.sh@950 -- # wait 65864 00:05:45.500 10:10:58 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:45.500 10:10:58 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:45.500 10:10:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:45.500 10:10:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.500 10:10:58 -- json_config/json_config.sh@381 -- # return 0 00:05:45.500 INFO: Success 00:05:45.500 10:10:58 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:45.500 00:05:45.500 real 0m8.035s 00:05:45.500 user 0m11.425s 00:05:45.500 sys 0m1.687s 00:05:45.500 10:10:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.500 10:10:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.500 ************************************ 00:05:45.500 END TEST json_config 00:05:45.500 ************************************ 00:05:45.758 10:10:58 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:45.758 
10:10:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:45.758 10:10:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.758 10:10:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.758 ************************************ 00:05:45.758 START TEST json_config_extra_key 00:05:45.758 ************************************ 00:05:45.758 10:10:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:45.758 10:10:59 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:45.758 10:10:59 -- nvmf/common.sh@7 -- # uname -s 00:05:45.758 10:10:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.758 10:10:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.758 10:10:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.758 10:10:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.758 10:10:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.758 10:10:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.758 10:10:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.758 10:10:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.758 10:10:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.758 10:10:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.758 10:10:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:05:45.758 10:10:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:05:45.758 10:10:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.758 10:10:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.758 10:10:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:45.759 10:10:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:45.759 10:10:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.759 10:10:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.759 10:10:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.759 10:10:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.759 10:10:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.759 10:10:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:05:45.759 10:10:59 -- paths/export.sh@5 -- # export PATH 00:05:45.759 10:10:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.759 10:10:59 -- nvmf/common.sh@46 -- # : 0 00:05:45.759 10:10:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:45.759 10:10:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:45.759 10:10:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:45.759 10:10:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.759 10:10:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.759 10:10:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:45.759 10:10:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:45.759 10:10:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:45.759 10:10:59 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:45.759 10:10:59 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:45.759 10:10:59 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:45.759 10:10:59 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:45.759 10:10:59 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:45.759 10:10:59 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:45.759 10:10:59 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:45.759 10:10:59 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:45.759 10:10:59 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:45.759 INFO: launching applications... 00:05:45.759 10:10:59 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:45.759 10:10:59 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:45.759 10:10:59 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:45.759 10:10:59 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:45.759 10:10:59 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:45.759 10:10:59 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:45.759 10:10:59 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=65998 00:05:45.759 Waiting for target to run... 00:05:45.759 10:10:59 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 
00:05:45.759 10:10:59 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:45.759 10:10:59 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 65998 /var/tmp/spdk_tgt.sock 00:05:45.759 10:10:59 -- common/autotest_common.sh@819 -- # '[' -z 65998 ']' 00:05:45.759 10:10:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:45.759 10:10:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:45.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:45.759 10:10:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:45.759 10:10:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:45.759 10:10:59 -- common/autotest_common.sh@10 -- # set +x 00:05:45.759 [2024-07-26 10:10:59.093176] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:45.759 [2024-07-26 10:10:59.093280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65998 ] 00:05:46.326 [2024-07-26 10:10:59.515843] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.326 [2024-07-26 10:10:59.577146] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.326 [2024-07-26 10:10:59.577334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.893 10:11:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:46.893 00:05:46.893 10:11:00 -- common/autotest_common.sh@852 -- # return 0 00:05:46.893 10:11:00 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:46.893 INFO: shutting down applications... 00:05:46.893 10:11:00 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
00:05:46.893 10:11:00 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:46.893 10:11:00 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:46.893 10:11:00 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:46.893 10:11:00 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 65998 ]] 00:05:46.893 10:11:00 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 65998 00:05:46.893 10:11:00 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:46.893 10:11:00 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:46.893 10:11:00 -- json_config/json_config_extra_key.sh@50 -- # kill -0 65998 00:05:46.893 10:11:00 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:47.151 10:11:00 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:47.151 10:11:00 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:47.151 10:11:00 -- json_config/json_config_extra_key.sh@50 -- # kill -0 65998 00:05:47.151 10:11:00 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:47.151 10:11:00 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:47.151 10:11:00 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:47.151 SPDK target shutdown done 00:05:47.151 10:11:00 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:47.151 Success 00:05:47.151 10:11:00 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:47.151 00:05:47.151 real 0m1.592s 00:05:47.151 user 0m1.458s 00:05:47.151 sys 0m0.392s 00:05:47.151 10:11:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.151 ************************************ 00:05:47.151 END TEST json_config_extra_key 00:05:47.151 ************************************ 00:05:47.151 10:11:00 -- common/autotest_common.sh@10 -- # set +x 00:05:47.151 10:11:00 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:47.151 10:11:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:47.151 10:11:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.151 10:11:00 -- common/autotest_common.sh@10 -- # set +x 00:05:47.410 ************************************ 00:05:47.410 START TEST alias_rpc 00:05:47.410 ************************************ 00:05:47.410 10:11:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:47.410 * Looking for test storage... 00:05:47.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:47.410 10:11:00 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:47.410 10:11:00 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=66067 00:05:47.410 10:11:00 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 66067 00:05:47.410 10:11:00 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:47.410 10:11:00 -- common/autotest_common.sh@819 -- # '[' -z 66067 ']' 00:05:47.410 10:11:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.410 10:11:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:47.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.410 10:11:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:47.410 10:11:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:47.410 10:11:00 -- common/autotest_common.sh@10 -- # set +x 00:05:47.410 [2024-07-26 10:11:00.729416] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:47.410 [2024-07-26 10:11:00.729510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66067 ] 00:05:47.410 [2024-07-26 10:11:00.863028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.667 [2024-07-26 10:11:00.955655] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:47.667 [2024-07-26 10:11:00.955830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.604 10:11:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:48.604 10:11:01 -- common/autotest_common.sh@852 -- # return 0 00:05:48.604 10:11:01 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:48.604 10:11:01 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 66067 00:05:48.604 10:11:01 -- common/autotest_common.sh@926 -- # '[' -z 66067 ']' 00:05:48.604 10:11:01 -- common/autotest_common.sh@930 -- # kill -0 66067 00:05:48.604 10:11:01 -- common/autotest_common.sh@931 -- # uname 00:05:48.604 10:11:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:48.604 10:11:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66067 00:05:48.604 10:11:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:48.604 killing process with pid 66067 00:05:48.604 10:11:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:48.604 10:11:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66067' 00:05:48.604 10:11:01 -- common/autotest_common.sh@945 -- # kill 66067 00:05:48.604 10:11:01 -- common/autotest_common.sh@950 -- # wait 66067 00:05:49.172 ************************************ 00:05:49.172 END TEST alias_rpc 00:05:49.172 ************************************ 00:05:49.172 00:05:49.172 real 0m1.737s 00:05:49.172 user 0m1.993s 00:05:49.172 sys 0m0.397s 00:05:49.173 10:11:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.173 10:11:02 -- common/autotest_common.sh@10 -- # set +x 00:05:49.173 10:11:02 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:49.173 10:11:02 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:49.173 10:11:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:49.173 10:11:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.173 10:11:02 -- common/autotest_common.sh@10 -- # set +x 00:05:49.173 ************************************ 00:05:49.173 START TEST spdkcli_tcp 00:05:49.173 ************************************ 00:05:49.173 10:11:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:49.173 * Looking for test storage... 
00:05:49.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:49.173 10:11:02 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:49.173 10:11:02 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:49.173 10:11:02 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:49.173 10:11:02 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:49.173 10:11:02 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:49.173 10:11:02 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:49.173 10:11:02 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:49.173 10:11:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:49.173 10:11:02 -- common/autotest_common.sh@10 -- # set +x 00:05:49.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.173 10:11:02 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=66142 00:05:49.173 10:11:02 -- spdkcli/tcp.sh@27 -- # waitforlisten 66142 00:05:49.173 10:11:02 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:49.173 10:11:02 -- common/autotest_common.sh@819 -- # '[' -z 66142 ']' 00:05:49.173 10:11:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.173 10:11:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:49.173 10:11:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.173 10:11:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:49.173 10:11:02 -- common/autotest_common.sh@10 -- # set +x 00:05:49.173 [2024-07-26 10:11:02.537251] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:05:49.173 [2024-07-26 10:11:02.537375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66142 ] 00:05:49.431 [2024-07-26 10:11:02.679883] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.432 [2024-07-26 10:11:02.779364] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:49.432 [2024-07-26 10:11:02.779914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.432 [2024-07-26 10:11:02.779932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.008 10:11:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:50.009 10:11:03 -- common/autotest_common.sh@852 -- # return 0 00:05:50.009 10:11:03 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:50.009 10:11:03 -- spdkcli/tcp.sh@31 -- # socat_pid=66159 00:05:50.009 10:11:03 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:50.278 [ 00:05:50.278 "bdev_malloc_delete", 00:05:50.278 "bdev_malloc_create", 00:05:50.278 "bdev_null_resize", 00:05:50.278 "bdev_null_delete", 00:05:50.278 "bdev_null_create", 00:05:50.278 "bdev_nvme_cuse_unregister", 00:05:50.278 "bdev_nvme_cuse_register", 00:05:50.278 "bdev_opal_new_user", 00:05:50.278 "bdev_opal_set_lock_state", 00:05:50.278 "bdev_opal_delete", 00:05:50.278 "bdev_opal_get_info", 00:05:50.278 "bdev_opal_create", 00:05:50.278 "bdev_nvme_opal_revert", 00:05:50.278 "bdev_nvme_opal_init", 00:05:50.278 "bdev_nvme_send_cmd", 00:05:50.278 "bdev_nvme_get_path_iostat", 00:05:50.278 "bdev_nvme_get_mdns_discovery_info", 00:05:50.278 "bdev_nvme_stop_mdns_discovery", 00:05:50.278 "bdev_nvme_start_mdns_discovery", 00:05:50.278 "bdev_nvme_set_multipath_policy", 00:05:50.278 "bdev_nvme_set_preferred_path", 00:05:50.278 "bdev_nvme_get_io_paths", 00:05:50.278 "bdev_nvme_remove_error_injection", 00:05:50.278 "bdev_nvme_add_error_injection", 00:05:50.278 "bdev_nvme_get_discovery_info", 00:05:50.278 "bdev_nvme_stop_discovery", 00:05:50.278 "bdev_nvme_start_discovery", 00:05:50.278 "bdev_nvme_get_controller_health_info", 00:05:50.278 "bdev_nvme_disable_controller", 00:05:50.278 "bdev_nvme_enable_controller", 00:05:50.278 "bdev_nvme_reset_controller", 00:05:50.278 "bdev_nvme_get_transport_statistics", 00:05:50.278 "bdev_nvme_apply_firmware", 00:05:50.278 "bdev_nvme_detach_controller", 00:05:50.278 "bdev_nvme_get_controllers", 00:05:50.278 "bdev_nvme_attach_controller", 00:05:50.278 "bdev_nvme_set_hotplug", 00:05:50.278 "bdev_nvme_set_options", 00:05:50.278 "bdev_passthru_delete", 00:05:50.278 "bdev_passthru_create", 00:05:50.278 "bdev_lvol_grow_lvstore", 00:05:50.278 "bdev_lvol_get_lvols", 00:05:50.278 "bdev_lvol_get_lvstores", 00:05:50.278 "bdev_lvol_delete", 00:05:50.278 "bdev_lvol_set_read_only", 00:05:50.278 "bdev_lvol_resize", 00:05:50.278 "bdev_lvol_decouple_parent", 00:05:50.278 "bdev_lvol_inflate", 00:05:50.278 "bdev_lvol_rename", 00:05:50.278 "bdev_lvol_clone_bdev", 00:05:50.278 "bdev_lvol_clone", 00:05:50.278 "bdev_lvol_snapshot", 00:05:50.278 "bdev_lvol_create", 00:05:50.278 "bdev_lvol_delete_lvstore", 00:05:50.278 "bdev_lvol_rename_lvstore", 00:05:50.278 "bdev_lvol_create_lvstore", 00:05:50.278 "bdev_raid_set_options", 00:05:50.278 "bdev_raid_remove_base_bdev", 00:05:50.278 "bdev_raid_add_base_bdev", 
00:05:50.278 "bdev_raid_delete", 00:05:50.278 "bdev_raid_create", 00:05:50.278 "bdev_raid_get_bdevs", 00:05:50.278 "bdev_error_inject_error", 00:05:50.278 "bdev_error_delete", 00:05:50.278 "bdev_error_create", 00:05:50.278 "bdev_split_delete", 00:05:50.278 "bdev_split_create", 00:05:50.278 "bdev_delay_delete", 00:05:50.278 "bdev_delay_create", 00:05:50.278 "bdev_delay_update_latency", 00:05:50.278 "bdev_zone_block_delete", 00:05:50.278 "bdev_zone_block_create", 00:05:50.278 "blobfs_create", 00:05:50.278 "blobfs_detect", 00:05:50.278 "blobfs_set_cache_size", 00:05:50.278 "bdev_aio_delete", 00:05:50.278 "bdev_aio_rescan", 00:05:50.278 "bdev_aio_create", 00:05:50.278 "bdev_ftl_set_property", 00:05:50.278 "bdev_ftl_get_properties", 00:05:50.278 "bdev_ftl_get_stats", 00:05:50.278 "bdev_ftl_unmap", 00:05:50.278 "bdev_ftl_unload", 00:05:50.278 "bdev_ftl_delete", 00:05:50.278 "bdev_ftl_load", 00:05:50.278 "bdev_ftl_create", 00:05:50.278 "bdev_virtio_attach_controller", 00:05:50.278 "bdev_virtio_scsi_get_devices", 00:05:50.278 "bdev_virtio_detach_controller", 00:05:50.278 "bdev_virtio_blk_set_hotplug", 00:05:50.278 "bdev_iscsi_delete", 00:05:50.278 "bdev_iscsi_create", 00:05:50.278 "bdev_iscsi_set_options", 00:05:50.278 "bdev_uring_delete", 00:05:50.278 "bdev_uring_create", 00:05:50.278 "accel_error_inject_error", 00:05:50.278 "ioat_scan_accel_module", 00:05:50.278 "dsa_scan_accel_module", 00:05:50.278 "iaa_scan_accel_module", 00:05:50.278 "iscsi_set_options", 00:05:50.278 "iscsi_get_auth_groups", 00:05:50.278 "iscsi_auth_group_remove_secret", 00:05:50.278 "iscsi_auth_group_add_secret", 00:05:50.278 "iscsi_delete_auth_group", 00:05:50.278 "iscsi_create_auth_group", 00:05:50.278 "iscsi_set_discovery_auth", 00:05:50.278 "iscsi_get_options", 00:05:50.278 "iscsi_target_node_request_logout", 00:05:50.278 "iscsi_target_node_set_redirect", 00:05:50.278 "iscsi_target_node_set_auth", 00:05:50.278 "iscsi_target_node_add_lun", 00:05:50.278 "iscsi_get_connections", 00:05:50.278 "iscsi_portal_group_set_auth", 00:05:50.278 "iscsi_start_portal_group", 00:05:50.278 "iscsi_delete_portal_group", 00:05:50.278 "iscsi_create_portal_group", 00:05:50.278 "iscsi_get_portal_groups", 00:05:50.278 "iscsi_delete_target_node", 00:05:50.278 "iscsi_target_node_remove_pg_ig_maps", 00:05:50.278 "iscsi_target_node_add_pg_ig_maps", 00:05:50.278 "iscsi_create_target_node", 00:05:50.278 "iscsi_get_target_nodes", 00:05:50.278 "iscsi_delete_initiator_group", 00:05:50.278 "iscsi_initiator_group_remove_initiators", 00:05:50.278 "iscsi_initiator_group_add_initiators", 00:05:50.278 "iscsi_create_initiator_group", 00:05:50.278 "iscsi_get_initiator_groups", 00:05:50.278 "nvmf_set_crdt", 00:05:50.278 "nvmf_set_config", 00:05:50.278 "nvmf_set_max_subsystems", 00:05:50.278 "nvmf_subsystem_get_listeners", 00:05:50.278 "nvmf_subsystem_get_qpairs", 00:05:50.278 "nvmf_subsystem_get_controllers", 00:05:50.278 "nvmf_get_stats", 00:05:50.278 "nvmf_get_transports", 00:05:50.278 "nvmf_create_transport", 00:05:50.278 "nvmf_get_targets", 00:05:50.278 "nvmf_delete_target", 00:05:50.278 "nvmf_create_target", 00:05:50.278 "nvmf_subsystem_allow_any_host", 00:05:50.278 "nvmf_subsystem_remove_host", 00:05:50.278 "nvmf_subsystem_add_host", 00:05:50.278 "nvmf_subsystem_remove_ns", 00:05:50.278 "nvmf_subsystem_add_ns", 00:05:50.278 "nvmf_subsystem_listener_set_ana_state", 00:05:50.278 "nvmf_discovery_get_referrals", 00:05:50.278 "nvmf_discovery_remove_referral", 00:05:50.278 "nvmf_discovery_add_referral", 00:05:50.278 "nvmf_subsystem_remove_listener", 00:05:50.279 
"nvmf_subsystem_add_listener", 00:05:50.279 "nvmf_delete_subsystem", 00:05:50.279 "nvmf_create_subsystem", 00:05:50.279 "nvmf_get_subsystems", 00:05:50.279 "env_dpdk_get_mem_stats", 00:05:50.279 "nbd_get_disks", 00:05:50.279 "nbd_stop_disk", 00:05:50.279 "nbd_start_disk", 00:05:50.279 "ublk_recover_disk", 00:05:50.279 "ublk_get_disks", 00:05:50.279 "ublk_stop_disk", 00:05:50.279 "ublk_start_disk", 00:05:50.279 "ublk_destroy_target", 00:05:50.279 "ublk_create_target", 00:05:50.279 "virtio_blk_create_transport", 00:05:50.279 "virtio_blk_get_transports", 00:05:50.279 "vhost_controller_set_coalescing", 00:05:50.279 "vhost_get_controllers", 00:05:50.279 "vhost_delete_controller", 00:05:50.279 "vhost_create_blk_controller", 00:05:50.279 "vhost_scsi_controller_remove_target", 00:05:50.279 "vhost_scsi_controller_add_target", 00:05:50.279 "vhost_start_scsi_controller", 00:05:50.279 "vhost_create_scsi_controller", 00:05:50.279 "thread_set_cpumask", 00:05:50.279 "framework_get_scheduler", 00:05:50.279 "framework_set_scheduler", 00:05:50.279 "framework_get_reactors", 00:05:50.279 "thread_get_io_channels", 00:05:50.279 "thread_get_pollers", 00:05:50.279 "thread_get_stats", 00:05:50.279 "framework_monitor_context_switch", 00:05:50.279 "spdk_kill_instance", 00:05:50.279 "log_enable_timestamps", 00:05:50.279 "log_get_flags", 00:05:50.279 "log_clear_flag", 00:05:50.279 "log_set_flag", 00:05:50.279 "log_get_level", 00:05:50.279 "log_set_level", 00:05:50.279 "log_get_print_level", 00:05:50.279 "log_set_print_level", 00:05:50.279 "framework_enable_cpumask_locks", 00:05:50.279 "framework_disable_cpumask_locks", 00:05:50.279 "framework_wait_init", 00:05:50.279 "framework_start_init", 00:05:50.279 "scsi_get_devices", 00:05:50.279 "bdev_get_histogram", 00:05:50.279 "bdev_enable_histogram", 00:05:50.279 "bdev_set_qos_limit", 00:05:50.279 "bdev_set_qd_sampling_period", 00:05:50.279 "bdev_get_bdevs", 00:05:50.279 "bdev_reset_iostat", 00:05:50.279 "bdev_get_iostat", 00:05:50.279 "bdev_examine", 00:05:50.279 "bdev_wait_for_examine", 00:05:50.279 "bdev_set_options", 00:05:50.279 "notify_get_notifications", 00:05:50.279 "notify_get_types", 00:05:50.279 "accel_get_stats", 00:05:50.279 "accel_set_options", 00:05:50.279 "accel_set_driver", 00:05:50.279 "accel_crypto_key_destroy", 00:05:50.279 "accel_crypto_keys_get", 00:05:50.279 "accel_crypto_key_create", 00:05:50.279 "accel_assign_opc", 00:05:50.279 "accel_get_module_info", 00:05:50.279 "accel_get_opc_assignments", 00:05:50.279 "vmd_rescan", 00:05:50.279 "vmd_remove_device", 00:05:50.279 "vmd_enable", 00:05:50.279 "sock_set_default_impl", 00:05:50.279 "sock_impl_set_options", 00:05:50.279 "sock_impl_get_options", 00:05:50.279 "iobuf_get_stats", 00:05:50.279 "iobuf_set_options", 00:05:50.279 "framework_get_pci_devices", 00:05:50.279 "framework_get_config", 00:05:50.279 "framework_get_subsystems", 00:05:50.279 "trace_get_info", 00:05:50.279 "trace_get_tpoint_group_mask", 00:05:50.279 "trace_disable_tpoint_group", 00:05:50.279 "trace_enable_tpoint_group", 00:05:50.279 "trace_clear_tpoint_mask", 00:05:50.279 "trace_set_tpoint_mask", 00:05:50.279 "spdk_get_version", 00:05:50.279 "rpc_get_methods" 00:05:50.279 ] 00:05:50.279 10:11:03 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:50.279 10:11:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:50.279 10:11:03 -- common/autotest_common.sh@10 -- # set +x 00:05:50.538 10:11:03 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:50.538 10:11:03 -- spdkcli/tcp.sh@38 -- # killprocess 66142 00:05:50.538 
10:11:03 -- common/autotest_common.sh@926 -- # '[' -z 66142 ']' 00:05:50.538 10:11:03 -- common/autotest_common.sh@930 -- # kill -0 66142 00:05:50.538 10:11:03 -- common/autotest_common.sh@931 -- # uname 00:05:50.538 10:11:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:50.538 10:11:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66142 00:05:50.538 killing process with pid 66142 00:05:50.538 10:11:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:50.538 10:11:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:50.538 10:11:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66142' 00:05:50.538 10:11:03 -- common/autotest_common.sh@945 -- # kill 66142 00:05:50.538 10:11:03 -- common/autotest_common.sh@950 -- # wait 66142 00:05:50.831 ************************************ 00:05:50.831 END TEST spdkcli_tcp 00:05:50.831 ************************************ 00:05:50.831 00:05:50.831 real 0m1.744s 00:05:50.831 user 0m3.229s 00:05:50.831 sys 0m0.449s 00:05:50.831 10:11:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.831 10:11:04 -- common/autotest_common.sh@10 -- # set +x 00:05:50.831 10:11:04 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:50.831 10:11:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:50.831 10:11:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:50.831 10:11:04 -- common/autotest_common.sh@10 -- # set +x 00:05:50.831 ************************************ 00:05:50.831 START TEST dpdk_mem_utility 00:05:50.831 ************************************ 00:05:50.831 10:11:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:50.831 * Looking for test storage... 00:05:50.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:50.831 10:11:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:50.831 10:11:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=66221 00:05:50.831 10:11:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 66221 00:05:50.831 10:11:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:50.831 10:11:04 -- common/autotest_common.sh@819 -- # '[' -z 66221 ']' 00:05:50.831 10:11:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.831 10:11:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:50.831 10:11:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.831 10:11:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:50.831 10:11:04 -- common/autotest_common.sh@10 -- # set +x 00:05:51.090 [2024-07-26 10:11:04.318866] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:05:51.090 [2024-07-26 10:11:04.318959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66221 ] 00:05:51.090 [2024-07-26 10:11:04.453815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.090 [2024-07-26 10:11:04.546397] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:51.090 [2024-07-26 10:11:04.546844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.028 10:11:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:52.028 10:11:05 -- common/autotest_common.sh@852 -- # return 0 00:05:52.028 10:11:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:52.028 10:11:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:52.028 10:11:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.028 10:11:05 -- common/autotest_common.sh@10 -- # set +x 00:05:52.028 { 00:05:52.028 "filename": "/tmp/spdk_mem_dump.txt" 00:05:52.028 } 00:05:52.028 10:11:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.028 10:11:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:52.028 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:52.028 1 heaps totaling size 814.000000 MiB 00:05:52.028 size: 814.000000 MiB heap id: 0 00:05:52.028 end heaps---------- 00:05:52.029 8 mempools totaling size 598.116089 MiB 00:05:52.029 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:52.029 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:52.029 size: 84.521057 MiB name: bdev_io_66221 00:05:52.029 size: 51.011292 MiB name: evtpool_66221 00:05:52.029 size: 50.003479 MiB name: msgpool_66221 00:05:52.029 size: 21.763794 MiB name: PDU_Pool 00:05:52.029 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:52.029 size: 0.026123 MiB name: Session_Pool 00:05:52.029 end mempools------- 00:05:52.029 6 memzones totaling size 4.142822 MiB 00:05:52.029 size: 1.000366 MiB name: RG_ring_0_66221 00:05:52.029 size: 1.000366 MiB name: RG_ring_1_66221 00:05:52.029 size: 1.000366 MiB name: RG_ring_4_66221 00:05:52.029 size: 1.000366 MiB name: RG_ring_5_66221 00:05:52.029 size: 0.125366 MiB name: RG_ring_2_66221 00:05:52.029 size: 0.015991 MiB name: RG_ring_3_66221 00:05:52.029 end memzones------- 00:05:52.029 10:11:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:52.029 heap id: 0 total size: 814.000000 MiB number of busy elements: 298 number of free elements: 15 00:05:52.029 list of free elements. 
size: 12.472290 MiB 00:05:52.029 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:52.029 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:52.029 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:52.029 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:52.029 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:52.029 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:52.029 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:52.029 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:52.029 element at address: 0x200000200000 with size: 0.832825 MiB 00:05:52.029 element at address: 0x20001aa00000 with size: 0.569336 MiB 00:05:52.029 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:52.029 element at address: 0x200000800000 with size: 0.486145 MiB 00:05:52.029 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:52.029 element at address: 0x200027e00000 with size: 0.396484 MiB 00:05:52.029 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:52.029 list of standard malloc elements. size: 199.265137 MiB 00:05:52.029 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:52.029 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:52.029 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:52.029 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:52.029 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:52.029 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:52.029 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:52.029 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:52.029 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:52.029 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d6480 with size: 0.000183 MiB 
00:05:52.029 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:52.029 element at address: 0x20000087c740 with size: 0.000183 MiB 00:05:52.029 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:52.029 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:52.029 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:52.029 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:52.029 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:52.029 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:52.029 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:52.029 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:52.029 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:52.029 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:52.029 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:52.029 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:05:52.029 element at address: 0x200003a59180 with size: 0.000183 MiB 00:05:52.029 element at address: 0x200003a59240 with size: 0.000183 MiB 00:05:52.029 element at address: 0x200003a59300 with size: 0.000183 MiB 00:05:52.029 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:52.029 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:52.029 element at 
address: 0x200003a59540 with size: 0.000183 MiB 00:05:52.029 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:52.029 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:52.029 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:52.029 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:52.029 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:52.029 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:52.030 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20000b27d640 
with size: 0.000183 MiB 00:05:52.030 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:52.030 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:52.030 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:52.030 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa937c0 with size: 0.000183 MiB 
00:05:52.030 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:52.030 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:52.031 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:52.031 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:52.031 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:52.031 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:52.031 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:52.031 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e65800 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e658c0 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6c4c0 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:52.031 element at 
address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6f180 
with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:52.031 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:52.031 list of memzone associated elements. size: 602.262573 MiB 00:05:52.031 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:52.031 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:52.031 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:52.031 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:52.031 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:52.031 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_66221_0 00:05:52.031 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:52.031 associated memzone info: size: 48.002930 MiB name: MP_evtpool_66221_0 00:05:52.031 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:52.031 associated memzone info: size: 48.002930 MiB name: MP_msgpool_66221_0 00:05:52.031 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:52.031 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:52.031 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:52.031 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:52.031 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:52.031 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_66221 00:05:52.031 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:52.031 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_66221 00:05:52.031 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:52.031 associated memzone info: size: 1.007996 MiB name: MP_evtpool_66221 00:05:52.031 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:52.031 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:52.031 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:52.031 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:52.031 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:52.031 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:52.031 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:52.031 associated memzone info: size: 1.007996 MiB 
name: MP_SCSI_TASK_Pool 00:05:52.031 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:52.031 associated memzone info: size: 1.000366 MiB name: RG_ring_0_66221 00:05:52.031 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:52.031 associated memzone info: size: 1.000366 MiB name: RG_ring_1_66221 00:05:52.031 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:52.031 associated memzone info: size: 1.000366 MiB name: RG_ring_4_66221 00:05:52.031 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:52.031 associated memzone info: size: 1.000366 MiB name: RG_ring_5_66221 00:05:52.031 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:52.031 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_66221 00:05:52.031 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:52.031 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:52.032 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:52.032 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:52.032 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:52.032 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:52.032 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:52.032 associated memzone info: size: 0.125366 MiB name: RG_ring_2_66221 00:05:52.032 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:52.032 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:52.032 element at address: 0x200027e65980 with size: 0.023743 MiB 00:05:52.032 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:52.032 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:52.032 associated memzone info: size: 0.015991 MiB name: RG_ring_3_66221 00:05:52.032 element at address: 0x200027e6bac0 with size: 0.002441 MiB 00:05:52.032 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:52.032 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:52.032 associated memzone info: size: 0.000183 MiB name: MP_msgpool_66221 00:05:52.032 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:52.032 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_66221 00:05:52.032 element at address: 0x200027e6c580 with size: 0.000305 MiB 00:05:52.032 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:52.032 10:11:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:52.032 10:11:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 66221 00:05:52.032 10:11:05 -- common/autotest_common.sh@926 -- # '[' -z 66221 ']' 00:05:52.032 10:11:05 -- common/autotest_common.sh@930 -- # kill -0 66221 00:05:52.032 10:11:05 -- common/autotest_common.sh@931 -- # uname 00:05:52.032 10:11:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:52.032 10:11:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66221 00:05:52.032 killing process with pid 66221 00:05:52.032 10:11:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:52.032 10:11:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:52.032 10:11:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66221' 00:05:52.032 10:11:05 -- common/autotest_common.sh@945 -- # kill 66221 00:05:52.032 10:11:05 -- common/autotest_common.sh@950 -- # wait 66221 00:05:52.599 00:05:52.599 real 0m1.606s 
00:05:52.599 user 0m1.710s 00:05:52.599 sys 0m0.425s 00:05:52.599 10:11:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.599 ************************************ 00:05:52.599 END TEST dpdk_mem_utility 00:05:52.599 ************************************ 00:05:52.599 10:11:05 -- common/autotest_common.sh@10 -- # set +x 00:05:52.599 10:11:05 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:52.599 10:11:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:52.599 10:11:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.599 10:11:05 -- common/autotest_common.sh@10 -- # set +x 00:05:52.599 ************************************ 00:05:52.599 START TEST event 00:05:52.599 ************************************ 00:05:52.599 10:11:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:52.599 * Looking for test storage... 00:05:52.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:52.599 10:11:05 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:52.599 10:11:05 -- bdev/nbd_common.sh@6 -- # set -e 00:05:52.599 10:11:05 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:52.599 10:11:05 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:52.599 10:11:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.599 10:11:05 -- common/autotest_common.sh@10 -- # set +x 00:05:52.599 ************************************ 00:05:52.599 START TEST event_perf 00:05:52.599 ************************************ 00:05:52.599 10:11:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:52.599 Running I/O for 1 seconds...[2024-07-26 10:11:05.945894] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:52.599 [2024-07-26 10:11:05.945989] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66297 ] 00:05:52.859 [2024-07-26 10:11:06.082748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:52.859 [2024-07-26 10:11:06.176180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.859 [2024-07-26 10:11:06.176292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.859 [2024-07-26 10:11:06.176372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.859 [2024-07-26 10:11:06.176371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.796 Running I/O for 1 seconds... 00:05:53.796 lcore 0: 190610 00:05:53.796 lcore 1: 190610 00:05:53.796 lcore 2: 190610 00:05:53.796 lcore 3: 190611 00:05:53.796 done. 
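event_perf above runs one reactor per core of the 0xF mask for one second and prints a per-lcore event counter before "done.". A small illustrative sketch that totals those counters (the numbers are copied from the run above; the parsing logic is not part of the test):

```python
# Illustrative only: aggregate the "lcore N: <events>" lines printed by event_perf above.
import re

sample = """\
lcore 0: 190610
lcore 1: 190610
lcore 2: 190610
lcore 3: 190611
"""

counts = {int(m.group(1)): int(m.group(2))
          for m in re.finditer(r"lcore (\d+): (\d+)", sample)}
total = sum(counts.values())
print(f"{len(counts)} reactors, {total} events in 1 second")
```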
00:05:53.796 00:05:53.796 real 0m1.322s 00:05:53.796 user 0m4.137s 00:05:53.796 sys 0m0.063s 00:05:53.796 10:11:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.796 10:11:07 -- common/autotest_common.sh@10 -- # set +x 00:05:53.796 ************************************ 00:05:53.796 END TEST event_perf 00:05:54.056 ************************************ 00:05:54.056 10:11:07 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:54.056 10:11:07 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:54.056 10:11:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.056 10:11:07 -- common/autotest_common.sh@10 -- # set +x 00:05:54.056 ************************************ 00:05:54.056 START TEST event_reactor 00:05:54.056 ************************************ 00:05:54.056 10:11:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:54.056 [2024-07-26 10:11:07.322877] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:54.056 [2024-07-26 10:11:07.323172] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66330 ] 00:05:54.056 [2024-07-26 10:11:07.455596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.314 [2024-07-26 10:11:07.549346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.253 test_start 00:05:55.253 oneshot 00:05:55.253 tick 100 00:05:55.253 tick 100 00:05:55.253 tick 250 00:05:55.253 tick 100 00:05:55.253 tick 100 00:05:55.253 tick 250 00:05:55.253 tick 500 00:05:55.253 tick 100 00:05:55.253 tick 100 00:05:55.253 tick 100 00:05:55.253 tick 250 00:05:55.253 tick 100 00:05:55.253 tick 100 00:05:55.253 test_end 00:05:55.253 ************************************ 00:05:55.253 END TEST event_reactor 00:05:55.253 ************************************ 00:05:55.253 00:05:55.253 real 0m1.312s 00:05:55.253 user 0m1.152s 00:05:55.253 sys 0m0.053s 00:05:55.253 10:11:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.253 10:11:08 -- common/autotest_common.sh@10 -- # set +x 00:05:55.253 10:11:08 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:55.253 10:11:08 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:55.253 10:11:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.253 10:11:08 -- common/autotest_common.sh@10 -- # set +x 00:05:55.253 ************************************ 00:05:55.253 START TEST event_reactor_perf 00:05:55.253 ************************************ 00:05:55.253 10:11:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:55.253 [2024-07-26 10:11:08.689842] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:05:55.253 [2024-07-26 10:11:08.689935] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66371 ] 00:05:55.512 [2024-07-26 10:11:08.824089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.512 [2024-07-26 10:11:08.909386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.927 test_start 00:05:56.927 test_end 00:05:56.927 Performance: 378484 events per second 00:05:56.927 ************************************ 00:05:56.927 END TEST event_reactor_perf 00:05:56.927 ************************************ 00:05:56.927 00:05:56.927 real 0m1.308s 00:05:56.927 user 0m1.147s 00:05:56.927 sys 0m0.055s 00:05:56.927 10:11:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.927 10:11:09 -- common/autotest_common.sh@10 -- # set +x 00:05:56.927 10:11:10 -- event/event.sh@49 -- # uname -s 00:05:56.927 10:11:10 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:56.927 10:11:10 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:56.927 10:11:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:56.927 10:11:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.927 10:11:10 -- common/autotest_common.sh@10 -- # set +x 00:05:56.927 ************************************ 00:05:56.928 START TEST event_scheduler 00:05:56.928 ************************************ 00:05:56.928 10:11:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:56.928 * Looking for test storage... 00:05:56.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:56.928 10:11:10 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:56.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.928 10:11:10 -- scheduler/scheduler.sh@35 -- # scheduler_pid=66426 00:05:56.928 10:11:10 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:56.928 10:11:10 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.928 10:11:10 -- scheduler/scheduler.sh@37 -- # waitforlisten 66426 00:05:56.928 10:11:10 -- common/autotest_common.sh@819 -- # '[' -z 66426 ']' 00:05:56.928 10:11:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.928 10:11:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:56.928 10:11:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.928 10:11:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:56.928 10:11:10 -- common/autotest_common.sh@10 -- # set +x 00:05:56.928 [2024-07-26 10:11:10.155650] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:05:56.928 [2024-07-26 10:11:10.155946] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66426 ] 00:05:56.928 [2024-07-26 10:11:10.290418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:57.187 [2024-07-26 10:11:10.397527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.187 [2024-07-26 10:11:10.397685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.187 [2024-07-26 10:11:10.397803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.187 [2024-07-26 10:11:10.397810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.754 10:11:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:57.754 10:11:11 -- common/autotest_common.sh@852 -- # return 0 00:05:57.754 10:11:11 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:57.754 10:11:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.754 10:11:11 -- common/autotest_common.sh@10 -- # set +x 00:05:57.754 POWER: Env isn't set yet! 00:05:57.754 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:57.754 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:57.754 POWER: Cannot set governor of lcore 0 to userspace 00:05:57.754 POWER: Attempting to initialise PSTAT power management... 00:05:57.754 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:57.754 POWER: Cannot set governor of lcore 0 to performance 00:05:57.754 POWER: Attempting to initialise CPPC power management... 00:05:57.754 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:57.754 POWER: Cannot set governor of lcore 0 to userspace 00:05:57.754 POWER: Attempting to initialise VM power management... 00:05:57.754 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:57.754 POWER: Unable to set Power Management Environment for lcore 0 00:05:57.754 [2024-07-26 10:11:11.063952] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:57.754 [2024-07-26 10:11:11.063966] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:57.754 [2024-07-26 10:11:11.063975] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:57.754 [2024-07-26 10:11:11.063988] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:57.754 [2024-07-26 10:11:11.063997] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:57.754 [2024-07-26 10:11:11.064004] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:57.754 10:11:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.754 10:11:11 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:57.754 10:11:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.754 10:11:11 -- common/autotest_common.sh@10 -- # set +x 00:05:57.754 [2024-07-26 10:11:11.154663] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
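Each power-governor probe above fails inside the VM, so the dpdk governor is skipped, but the dynamic scheduler is still selected and the framework then starts (scheduler.sh@39 and @40). Driving the same two RPCs from a script might look like the sketch below; it reuses the rpc_call() helper from the earlier sketch, the import module name is hypothetical, and the "name" parameter spelling is an assumption.

```python
# Sketch only: select the dynamic scheduler over JSON-RPC, then start the framework.
# Assumes the target was launched with --wait-for-rpc (as the scheduler app here was),
# otherwise the scheduler can no longer be changed after init.
from rpc_sketch import rpc_call  # hypothetical module holding the helper shown earlier

rpc_call("framework_set_scheduler", {"name": "dynamic"})  # mirrors scheduler.sh@39
rpc_call("framework_start_init")                          # mirrors scheduler.sh@40
print(rpc_call("framework_get_scheduler"))                # report the active scheduler
```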
00:05:57.754 10:11:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.754 10:11:11 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:57.754 10:11:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:57.754 10:11:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.754 10:11:11 -- common/autotest_common.sh@10 -- # set +x 00:05:57.754 ************************************ 00:05:57.754 START TEST scheduler_create_thread 00:05:57.754 ************************************ 00:05:57.754 10:11:11 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:57.754 10:11:11 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:57.754 10:11:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.754 10:11:11 -- common/autotest_common.sh@10 -- # set +x 00:05:57.754 2 00:05:57.754 10:11:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.754 10:11:11 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:57.754 10:11:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.755 10:11:11 -- common/autotest_common.sh@10 -- # set +x 00:05:57.755 3 00:05:57.755 10:11:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.755 10:11:11 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:57.755 10:11:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.755 10:11:11 -- common/autotest_common.sh@10 -- # set +x 00:05:57.755 4 00:05:57.755 10:11:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.755 10:11:11 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:57.755 10:11:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.755 10:11:11 -- common/autotest_common.sh@10 -- # set +x 00:05:57.755 5 00:05:57.755 10:11:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.755 10:11:11 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:57.755 10:11:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.755 10:11:11 -- common/autotest_common.sh@10 -- # set +x 00:05:57.755 6 00:05:57.755 10:11:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.755 10:11:11 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:57.755 10:11:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.755 10:11:11 -- common/autotest_common.sh@10 -- # set +x 00:05:58.013 7 00:05:58.013 10:11:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:58.013 10:11:11 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:58.013 10:11:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:58.013 10:11:11 -- common/autotest_common.sh@10 -- # set +x 00:05:58.013 8 00:05:58.013 10:11:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:58.013 10:11:11 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:58.013 10:11:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:58.013 10:11:11 -- common/autotest_common.sh@10 -- # set +x 00:05:58.013 9 00:05:58.013 
10:11:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:58.013 10:11:11 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:58.013 10:11:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:58.013 10:11:11 -- common/autotest_common.sh@10 -- # set +x 00:05:58.013 10 00:05:58.013 10:11:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:58.013 10:11:11 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:58.013 10:11:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:58.013 10:11:11 -- common/autotest_common.sh@10 -- # set +x 00:05:58.013 10:11:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:58.013 10:11:11 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:58.013 10:11:11 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:58.013 10:11:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:58.013 10:11:11 -- common/autotest_common.sh@10 -- # set +x 00:05:58.013 10:11:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:58.013 10:11:11 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:58.013 10:11:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:58.013 10:11:11 -- common/autotest_common.sh@10 -- # set +x 00:05:58.581 10:11:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:58.581 10:11:11 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:58.581 10:11:11 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:58.581 10:11:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:58.581 10:11:11 -- common/autotest_common.sh@10 -- # set +x 00:05:59.519 ************************************ 00:05:59.519 END TEST scheduler_create_thread 00:05:59.519 ************************************ 00:05:59.519 10:11:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.519 00:05:59.519 real 0m1.749s 00:05:59.519 user 0m0.016s 00:05:59.519 sys 0m0.005s 00:05:59.519 10:11:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.519 10:11:12 -- common/autotest_common.sh@10 -- # set +x 00:05:59.519 10:11:12 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:59.519 10:11:12 -- scheduler/scheduler.sh@46 -- # killprocess 66426 00:05:59.519 10:11:12 -- common/autotest_common.sh@926 -- # '[' -z 66426 ']' 00:05:59.519 10:11:12 -- common/autotest_common.sh@930 -- # kill -0 66426 00:05:59.519 10:11:12 -- common/autotest_common.sh@931 -- # uname 00:05:59.519 10:11:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:59.519 10:11:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66426 00:05:59.778 killing process with pid 66426 00:05:59.778 10:11:12 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:59.778 10:11:12 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:59.778 10:11:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66426' 00:05:59.778 10:11:12 -- common/autotest_common.sh@945 -- # kill 66426 00:05:59.778 10:11:12 -- common/autotest_common.sh@950 -- # wait 66426 00:06:00.036 [2024-07-26 10:11:13.393492] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
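The threads created above are pinned with hex cpumasks (-m 0x1 through -m 0x8) inside an app started on -m 0xF; each set bit in such a mask selects one reactor core. A quick illustration of how a mask maps to core IDs (the helper name is hypothetical):

```python
# Hypothetical helper: expand an SPDK-style hex cpumask into the core IDs it selects.
def cores_from_mask(mask: str) -> list[int]:
    value = int(mask, 16)
    return [bit for bit in range(value.bit_length()) if value >> bit & 1]

for mask in ("0x1", "0x2", "0x4", "0x8", "0xF"):
    print(mask, "->", cores_from_mask(mask))
# 0xF -> [0, 1, 2, 3]: the four reactor cores reported when the app started
```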
00:06:00.294 ************************************ 00:06:00.294 END TEST event_scheduler 00:06:00.294 ************************************ 00:06:00.294 00:06:00.294 real 0m3.567s 00:06:00.294 user 0m6.239s 00:06:00.294 sys 0m0.365s 00:06:00.294 10:11:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.294 10:11:13 -- common/autotest_common.sh@10 -- # set +x 00:06:00.294 10:11:13 -- event/event.sh@51 -- # modprobe -n nbd 00:06:00.294 10:11:13 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:00.294 10:11:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:00.294 10:11:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.294 10:11:13 -- common/autotest_common.sh@10 -- # set +x 00:06:00.294 ************************************ 00:06:00.294 START TEST app_repeat 00:06:00.294 ************************************ 00:06:00.294 10:11:13 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:00.294 10:11:13 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.294 10:11:13 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.294 10:11:13 -- event/event.sh@13 -- # local nbd_list 00:06:00.294 10:11:13 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.294 10:11:13 -- event/event.sh@14 -- # local bdev_list 00:06:00.294 10:11:13 -- event/event.sh@15 -- # local repeat_times=4 00:06:00.294 10:11:13 -- event/event.sh@17 -- # modprobe nbd 00:06:00.294 10:11:13 -- event/event.sh@19 -- # repeat_pid=66515 00:06:00.294 10:11:13 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.294 Process app_repeat pid: 66515 00:06:00.294 spdk_app_start Round 0 00:06:00.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.294 10:11:13 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 66515' 00:06:00.294 10:11:13 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:00.294 10:11:13 -- event/event.sh@23 -- # for i in {0..2} 00:06:00.294 10:11:13 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:00.294 10:11:13 -- event/event.sh@25 -- # waitforlisten 66515 /var/tmp/spdk-nbd.sock 00:06:00.294 10:11:13 -- common/autotest_common.sh@819 -- # '[' -z 66515 ']' 00:06:00.294 10:11:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.294 10:11:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:00.294 10:11:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.294 10:11:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:00.294 10:11:13 -- common/autotest_common.sh@10 -- # set +x 00:06:00.294 [2024-07-26 10:11:13.673136] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
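For reference, the app_repeat round driven above starts the test app on a private RPC socket and waits for it to come up before each iteration. A rough sketch using the same arguments visible in the log (paths shortened; the polling loop is a stand-in for the harness's waitforlisten helper, not its actual implementation):

  SOCK=/var/tmp/spdk-nbd.sock
  test/event/app_repeat/app_repeat -r "$SOCK" -m 0x3 -t 4 &    # two cores, four repeat rounds
  repeat_pid=$!
  # block until the app is listening on its UNIX-domain RPC socket
  while ! scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done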
00:06:00.294 [2024-07-26 10:11:13.673222] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66515 ] 00:06:00.553 [2024-07-26 10:11:13.805952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.553 [2024-07-26 10:11:13.899883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.553 [2024-07-26 10:11:13.899895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.490 10:11:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:01.490 10:11:14 -- common/autotest_common.sh@852 -- # return 0 00:06:01.490 10:11:14 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.490 Malloc0 00:06:01.490 10:11:14 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.763 Malloc1 00:06:01.763 10:11:15 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.763 10:11:15 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.763 10:11:15 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.763 10:11:15 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:01.763 10:11:15 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.763 10:11:15 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:01.763 10:11:15 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.763 10:11:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.763 10:11:15 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.763 10:11:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:01.763 10:11:15 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.763 10:11:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:01.763 10:11:15 -- bdev/nbd_common.sh@12 -- # local i 00:06:01.763 10:11:15 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:01.763 10:11:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.763 10:11:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:02.043 /dev/nbd0 00:06:02.043 10:11:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:02.043 10:11:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:02.043 10:11:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:02.043 10:11:15 -- common/autotest_common.sh@857 -- # local i 00:06:02.043 10:11:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:02.043 10:11:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:02.043 10:11:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:02.043 10:11:15 -- common/autotest_common.sh@861 -- # break 00:06:02.043 10:11:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:02.043 10:11:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:02.043 10:11:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.043 1+0 records in 00:06:02.043 1+0 records out 00:06:02.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191437 s, 21.4 MB/s 00:06:02.043 10:11:15 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.043 10:11:15 -- common/autotest_common.sh@874 -- # size=4096 00:06:02.043 10:11:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.043 10:11:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:02.043 10:11:15 -- common/autotest_common.sh@877 -- # return 0 00:06:02.043 10:11:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.043 10:11:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.043 10:11:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.303 /dev/nbd1 00:06:02.303 10:11:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.303 10:11:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.303 10:11:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:02.303 10:11:15 -- common/autotest_common.sh@857 -- # local i 00:06:02.303 10:11:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:02.303 10:11:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:02.303 10:11:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:02.303 10:11:15 -- common/autotest_common.sh@861 -- # break 00:06:02.303 10:11:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:02.303 10:11:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:02.303 10:11:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.303 1+0 records in 00:06:02.303 1+0 records out 00:06:02.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642759 s, 6.4 MB/s 00:06:02.303 10:11:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.303 10:11:15 -- common/autotest_common.sh@874 -- # size=4096 00:06:02.303 10:11:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.303 10:11:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:02.303 10:11:15 -- common/autotest_common.sh@877 -- # return 0 00:06:02.303 10:11:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.303 10:11:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.303 10:11:15 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.303 10:11:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.303 10:11:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.562 10:11:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:02.562 { 00:06:02.562 "nbd_device": "/dev/nbd0", 00:06:02.562 "bdev_name": "Malloc0" 00:06:02.562 }, 00:06:02.562 { 00:06:02.562 "nbd_device": "/dev/nbd1", 00:06:02.562 "bdev_name": "Malloc1" 00:06:02.562 } 00:06:02.562 ]' 00:06:02.562 10:11:15 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:02.562 { 00:06:02.562 "nbd_device": "/dev/nbd0", 00:06:02.562 "bdev_name": "Malloc0" 00:06:02.562 }, 00:06:02.562 { 00:06:02.562 "nbd_device": "/dev/nbd1", 00:06:02.562 "bdev_name": "Malloc1" 00:06:02.562 } 00:06:02.562 ]' 00:06:02.562 10:11:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.821 10:11:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.821 /dev/nbd1' 00:06:02.821 10:11:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.821 10:11:16 -- bdev/nbd_common.sh@65 -- # echo 
'/dev/nbd0 00:06:02.821 /dev/nbd1' 00:06:02.821 10:11:16 -- bdev/nbd_common.sh@65 -- # count=2 00:06:02.821 10:11:16 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:02.821 10:11:16 -- bdev/nbd_common.sh@95 -- # count=2 00:06:02.821 10:11:16 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:02.821 10:11:16 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:02.821 10:11:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.821 10:11:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.821 10:11:16 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.821 10:11:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.821 10:11:16 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.821 10:11:16 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:02.821 256+0 records in 00:06:02.821 256+0 records out 00:06:02.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105579 s, 99.3 MB/s 00:06:02.821 10:11:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.821 10:11:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.821 256+0 records in 00:06:02.821 256+0 records out 00:06:02.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254457 s, 41.2 MB/s 00:06:02.821 10:11:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.821 10:11:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.821 256+0 records in 00:06:02.821 256+0 records out 00:06:02.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02539 s, 41.3 MB/s 00:06:02.821 10:11:16 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:02.821 10:11:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.821 10:11:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.821 10:11:16 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:02.822 10:11:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.822 10:11:16 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.822 10:11:16 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.822 10:11:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.822 10:11:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:02.822 10:11:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.822 10:11:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:02.822 10:11:16 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.822 10:11:16 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:02.822 10:11:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.822 10:11:16 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.822 10:11:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.822 10:11:16 -- bdev/nbd_common.sh@51 -- # local i 00:06:02.822 10:11:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.822 10:11:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.081 10:11:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.081 10:11:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.081 10:11:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.081 10:11:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.081 10:11:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.081 10:11:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.081 10:11:16 -- bdev/nbd_common.sh@41 -- # break 00:06:03.081 10:11:16 -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.081 10:11:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.081 10:11:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.340 10:11:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.340 10:11:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.340 10:11:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.340 10:11:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.340 10:11:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.340 10:11:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.340 10:11:16 -- bdev/nbd_common.sh@41 -- # break 00:06:03.340 10:11:16 -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.340 10:11:16 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.340 10:11:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.340 10:11:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.599 10:11:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:03.599 10:11:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.599 10:11:16 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:03.599 10:11:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:03.599 10:11:16 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:03.599 10:11:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.599 10:11:16 -- bdev/nbd_common.sh@65 -- # true 00:06:03.599 10:11:16 -- bdev/nbd_common.sh@65 -- # count=0 00:06:03.599 10:11:16 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:03.599 10:11:16 -- bdev/nbd_common.sh@104 -- # count=0 00:06:03.599 10:11:16 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:03.599 10:11:16 -- bdev/nbd_common.sh@109 -- # return 0 00:06:03.599 10:11:16 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.858 10:11:17 -- event/event.sh@35 -- # sleep 3 00:06:04.117 [2024-07-26 10:11:17.360092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.117 [2024-07-26 10:11:17.429375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.117 [2024-07-26 10:11:17.429387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.117 [2024-07-26 10:11:17.484818] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.117 [2024-07-26 10:11:17.484885] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.405 spdk_app_start Round 1 00:06:07.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
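Each round that just completed follows the same data-verification pattern. A condensed sketch of one round, built only from the commands that appear in the log (variable names and the loop structure are mine; the harness interleaves the steps slightly differently):

  SOCK=/var/tmp/spdk-nbd.sock
  m0=$(scripts/rpc.py -s "$SOCK" bdev_malloc_create 64 4096)     # 64 MB bdev, 4 KiB blocks
  m1=$(scripts/rpc.py -s "$SOCK" bdev_malloc_create 64 4096)
  scripts/rpc.py -s "$SOCK" nbd_start_disk "$m0" /dev/nbd0       # export both bdevs as NBD devices
  scripts/rpc.py -s "$SOCK" nbd_start_disk "$m1" /dev/nbd1
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256            # 1 MiB of reference data
  for d in /dev/nbd0 /dev/nbd1; do
      dd if=nbdrandtest of="$d" bs=4096 count=256 oflag=direct   # write it through each device
      cmp -b -n 1M nbdrandtest "$d"                              # read back and compare to the file
  done
  scripts/rpc.py -s "$SOCK" nbd_stop_disk /dev/nbd0
  scripts/rpc.py -s "$SOCK" nbd_stop_disk /dev/nbd1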
00:06:07.405 10:11:20 -- event/event.sh@23 -- # for i in {0..2} 00:06:07.405 10:11:20 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:07.405 10:11:20 -- event/event.sh@25 -- # waitforlisten 66515 /var/tmp/spdk-nbd.sock 00:06:07.405 10:11:20 -- common/autotest_common.sh@819 -- # '[' -z 66515 ']' 00:06:07.405 10:11:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.405 10:11:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:07.405 10:11:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.405 10:11:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:07.405 10:11:20 -- common/autotest_common.sh@10 -- # set +x 00:06:07.405 10:11:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:07.405 10:11:20 -- common/autotest_common.sh@852 -- # return 0 00:06:07.405 10:11:20 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.405 Malloc0 00:06:07.405 10:11:20 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.664 Malloc1 00:06:07.664 10:11:20 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.664 10:11:20 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.664 10:11:20 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.664 10:11:20 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:07.664 10:11:20 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.664 10:11:20 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:07.664 10:11:20 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.664 10:11:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.664 10:11:20 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.664 10:11:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:07.664 10:11:20 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.664 10:11:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:07.664 10:11:20 -- bdev/nbd_common.sh@12 -- # local i 00:06:07.664 10:11:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:07.664 10:11:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.664 10:11:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:07.923 /dev/nbd0 00:06:07.923 10:11:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.923 10:11:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.923 10:11:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:07.923 10:11:21 -- common/autotest_common.sh@857 -- # local i 00:06:07.923 10:11:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:07.923 10:11:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:07.923 10:11:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:07.923 10:11:21 -- common/autotest_common.sh@861 -- # break 00:06:07.923 10:11:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:07.923 10:11:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:07.923 10:11:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:07.923 1+0 records in 00:06:07.923 1+0 records out 00:06:07.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420935 s, 9.7 MB/s 00:06:07.924 10:11:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.924 10:11:21 -- common/autotest_common.sh@874 -- # size=4096 00:06:07.924 10:11:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.924 10:11:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:07.924 10:11:21 -- common/autotest_common.sh@877 -- # return 0 00:06:07.924 10:11:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.924 10:11:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.924 10:11:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:08.182 /dev/nbd1 00:06:08.182 10:11:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:08.182 10:11:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:08.182 10:11:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:08.182 10:11:21 -- common/autotest_common.sh@857 -- # local i 00:06:08.182 10:11:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:08.182 10:11:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:08.182 10:11:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:08.182 10:11:21 -- common/autotest_common.sh@861 -- # break 00:06:08.182 10:11:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:08.182 10:11:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:08.182 10:11:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.182 1+0 records in 00:06:08.182 1+0 records out 00:06:08.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553072 s, 7.4 MB/s 00:06:08.182 10:11:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.182 10:11:21 -- common/autotest_common.sh@874 -- # size=4096 00:06:08.182 10:11:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.182 10:11:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:08.182 10:11:21 -- common/autotest_common.sh@877 -- # return 0 00:06:08.182 10:11:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.182 10:11:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.182 10:11:21 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.182 10:11:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.182 10:11:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.441 10:11:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:08.441 { 00:06:08.441 "nbd_device": "/dev/nbd0", 00:06:08.441 "bdev_name": "Malloc0" 00:06:08.441 }, 00:06:08.441 { 00:06:08.441 "nbd_device": "/dev/nbd1", 00:06:08.441 "bdev_name": "Malloc1" 00:06:08.441 } 00:06:08.441 ]' 00:06:08.441 10:11:21 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:08.441 { 00:06:08.441 "nbd_device": "/dev/nbd0", 00:06:08.441 "bdev_name": "Malloc0" 00:06:08.441 }, 00:06:08.441 { 00:06:08.441 "nbd_device": "/dev/nbd1", 00:06:08.441 "bdev_name": "Malloc1" 00:06:08.441 } 00:06:08.441 ]' 00:06:08.441 10:11:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.441 10:11:21 -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name='/dev/nbd0 00:06:08.441 /dev/nbd1' 00:06:08.441 10:11:21 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:08.441 /dev/nbd1' 00:06:08.441 10:11:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.441 10:11:21 -- bdev/nbd_common.sh@65 -- # count=2 00:06:08.441 10:11:21 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:08.441 10:11:21 -- bdev/nbd_common.sh@95 -- # count=2 00:06:08.441 10:11:21 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:08.441 10:11:21 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:08.441 10:11:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.441 10:11:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.441 10:11:21 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:08.441 10:11:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.441 10:11:21 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:08.441 10:11:21 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:08.441 256+0 records in 00:06:08.441 256+0 records out 00:06:08.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105364 s, 99.5 MB/s 00:06:08.441 10:11:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.441 10:11:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:08.442 256+0 records in 00:06:08.442 256+0 records out 00:06:08.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244119 s, 43.0 MB/s 00:06:08.442 10:11:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.442 10:11:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:08.442 256+0 records in 00:06:08.442 256+0 records out 00:06:08.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292136 s, 35.9 MB/s 00:06:08.442 10:11:21 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:08.442 10:11:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.442 10:11:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.442 10:11:21 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:08.442 10:11:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.442 10:11:21 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:08.442 10:11:21 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:08.442 10:11:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.442 10:11:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:08.442 10:11:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.442 10:11:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:08.442 10:11:21 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.442 10:11:21 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:08.442 10:11:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.442 10:11:21 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.442 10:11:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:08.442 10:11:21 -- bdev/nbd_common.sh@51 -- # local i 00:06:08.442 
10:11:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.442 10:11:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:08.700 10:11:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:08.700 10:11:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:08.700 10:11:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:08.700 10:11:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.700 10:11:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.700 10:11:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:08.700 10:11:22 -- bdev/nbd_common.sh@41 -- # break 00:06:08.700 10:11:22 -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.700 10:11:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.700 10:11:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.988 10:11:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.988 10:11:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.988 10:11:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.988 10:11:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.988 10:11:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.988 10:11:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.988 10:11:22 -- bdev/nbd_common.sh@41 -- # break 00:06:08.988 10:11:22 -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.988 10:11:22 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.988 10:11:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.988 10:11:22 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.246 10:11:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.246 10:11:22 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.246 10:11:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.505 10:11:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.505 10:11:22 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.505 10:11:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.505 10:11:22 -- bdev/nbd_common.sh@65 -- # true 00:06:09.505 10:11:22 -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.505 10:11:22 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.505 10:11:22 -- bdev/nbd_common.sh@104 -- # count=0 00:06:09.505 10:11:22 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:09.505 10:11:22 -- bdev/nbd_common.sh@109 -- # return 0 00:06:09.505 10:11:22 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:09.763 10:11:23 -- event/event.sh@35 -- # sleep 3 00:06:10.021 [2024-07-26 10:11:23.224128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.021 [2024-07-26 10:11:23.312314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.021 [2024-07-26 10:11:23.312323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.021 [2024-07-26 10:11:23.372721] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.021 [2024-07-26 10:11:23.372787] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
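The repeated grep/dd/stat sequences above come from the readiness probe each nbd_start_disk goes through. Roughly, assuming the 20-attempt budget seen in the log (the function name and the sleep interval are mine, the individual checks are the ones logged):

  waitfornbd_sketch() {
      local nbd=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd" /proc/partitions && break            # wait for the kernel to publish the device
          sleep 0.1
      done
      dd if=/dev/$nbd of=nbdtest bs=4096 count=1 iflag=direct    # one direct-I/O read must succeed...
      [ "$(stat -c %s nbdtest)" != 0 ]; local ok=$?              # ...and produce a non-empty block
      rm -f nbdtest
      return $ok
  }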
00:06:13.308 10:11:26 -- event/event.sh@23 -- # for i in {0..2} 00:06:13.308 spdk_app_start Round 2 00:06:13.308 10:11:26 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:13.308 10:11:26 -- event/event.sh@25 -- # waitforlisten 66515 /var/tmp/spdk-nbd.sock 00:06:13.309 10:11:26 -- common/autotest_common.sh@819 -- # '[' -z 66515 ']' 00:06:13.309 10:11:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.309 10:11:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:13.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:13.309 10:11:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:13.309 10:11:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:13.309 10:11:26 -- common/autotest_common.sh@10 -- # set +x 00:06:13.309 10:11:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:13.309 10:11:26 -- common/autotest_common.sh@852 -- # return 0 00:06:13.309 10:11:26 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.309 Malloc0 00:06:13.309 10:11:26 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.566 Malloc1 00:06:13.566 10:11:26 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.566 10:11:26 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.566 10:11:26 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.566 10:11:26 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.566 10:11:26 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.566 10:11:26 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.566 10:11:26 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.566 10:11:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.566 10:11:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.566 10:11:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.566 10:11:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.566 10:11:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:13.566 10:11:26 -- bdev/nbd_common.sh@12 -- # local i 00:06:13.566 10:11:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.566 10:11:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.566 10:11:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:13.824 /dev/nbd0 00:06:13.824 10:11:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.824 10:11:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.824 10:11:27 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:13.824 10:11:27 -- common/autotest_common.sh@857 -- # local i 00:06:13.824 10:11:27 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:13.824 10:11:27 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:13.824 10:11:27 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:13.824 10:11:27 -- common/autotest_common.sh@861 -- # break 00:06:13.824 10:11:27 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:13.824 10:11:27 -- common/autotest_common.sh@872 -- # (( i 
<= 20 )) 00:06:13.824 10:11:27 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.824 1+0 records in 00:06:13.824 1+0 records out 00:06:13.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362369 s, 11.3 MB/s 00:06:13.824 10:11:27 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.824 10:11:27 -- common/autotest_common.sh@874 -- # size=4096 00:06:13.824 10:11:27 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.824 10:11:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:13.824 10:11:27 -- common/autotest_common.sh@877 -- # return 0 00:06:13.824 10:11:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.824 10:11:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.824 10:11:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:14.082 /dev/nbd1 00:06:14.082 10:11:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:14.082 10:11:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:14.082 10:11:27 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:14.082 10:11:27 -- common/autotest_common.sh@857 -- # local i 00:06:14.082 10:11:27 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:14.082 10:11:27 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:14.082 10:11:27 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:14.082 10:11:27 -- common/autotest_common.sh@861 -- # break 00:06:14.082 10:11:27 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:14.082 10:11:27 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:14.082 10:11:27 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.082 1+0 records in 00:06:14.082 1+0 records out 00:06:14.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450258 s, 9.1 MB/s 00:06:14.082 10:11:27 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.082 10:11:27 -- common/autotest_common.sh@874 -- # size=4096 00:06:14.082 10:11:27 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.082 10:11:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:14.082 10:11:27 -- common/autotest_common.sh@877 -- # return 0 00:06:14.082 10:11:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.082 10:11:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.082 10:11:27 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.082 10:11:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.082 10:11:27 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:14.339 { 00:06:14.339 "nbd_device": "/dev/nbd0", 00:06:14.339 "bdev_name": "Malloc0" 00:06:14.339 }, 00:06:14.339 { 00:06:14.339 "nbd_device": "/dev/nbd1", 00:06:14.339 "bdev_name": "Malloc1" 00:06:14.339 } 00:06:14.339 ]' 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.339 { 00:06:14.339 "nbd_device": "/dev/nbd0", 00:06:14.339 "bdev_name": "Malloc0" 00:06:14.339 }, 00:06:14.339 { 00:06:14.339 "nbd_device": "/dev/nbd1", 00:06:14.339 "bdev_name": "Malloc1" 00:06:14.339 } 
00:06:14.339 ]' 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.339 /dev/nbd1' 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.339 /dev/nbd1' 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@65 -- # count=2 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@95 -- # count=2 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:14.339 256+0 records in 00:06:14.339 256+0 records out 00:06:14.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00650069 s, 161 MB/s 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.339 256+0 records in 00:06:14.339 256+0 records out 00:06:14.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030843 s, 34.0 MB/s 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.339 256+0 records in 00:06:14.339 256+0 records out 00:06:14.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301386 s, 34.8 MB/s 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:06:14.339 10:11:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@51 -- # local i 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.339 10:11:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.596 10:11:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.596 10:11:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.596 10:11:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.596 10:11:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.596 10:11:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.596 10:11:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.596 10:11:28 -- bdev/nbd_common.sh@41 -- # break 00:06:14.596 10:11:28 -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.596 10:11:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.596 10:11:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.854 10:11:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.854 10:11:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.854 10:11:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.854 10:11:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.854 10:11:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.854 10:11:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.854 10:11:28 -- bdev/nbd_common.sh@41 -- # break 00:06:14.854 10:11:28 -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.854 10:11:28 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.854 10:11:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.854 10:11:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.112 10:11:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.112 10:11:28 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.112 10:11:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.370 10:11:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.370 10:11:28 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.370 10:11:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.370 10:11:28 -- bdev/nbd_common.sh@65 -- # true 00:06:15.370 10:11:28 -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.370 10:11:28 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.370 10:11:28 -- bdev/nbd_common.sh@104 -- # count=0 00:06:15.370 10:11:28 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:15.370 10:11:28 -- bdev/nbd_common.sh@109 -- # return 0 00:06:15.370 10:11:28 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.370 10:11:28 -- event/event.sh@35 -- # sleep 3 00:06:15.628 [2024-07-26 10:11:29.007897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.886 [2024-07-26 10:11:29.087299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.886 [2024-07-26 10:11:29.087309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.886 [2024-07-26 10:11:29.143951] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:06:15.886 [2024-07-26 10:11:29.144033] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:18.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:18.445 10:11:31 -- event/event.sh@38 -- # waitforlisten 66515 /var/tmp/spdk-nbd.sock 00:06:18.445 10:11:31 -- common/autotest_common.sh@819 -- # '[' -z 66515 ']' 00:06:18.445 10:11:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.445 10:11:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:18.445 10:11:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.445 10:11:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:18.445 10:11:31 -- common/autotest_common.sh@10 -- # set +x 00:06:18.701 10:11:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:18.701 10:11:32 -- common/autotest_common.sh@852 -- # return 0 00:06:18.701 10:11:32 -- event/event.sh@39 -- # killprocess 66515 00:06:18.701 10:11:32 -- common/autotest_common.sh@926 -- # '[' -z 66515 ']' 00:06:18.701 10:11:32 -- common/autotest_common.sh@930 -- # kill -0 66515 00:06:18.701 10:11:32 -- common/autotest_common.sh@931 -- # uname 00:06:18.701 10:11:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:18.701 10:11:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66515 00:06:18.701 killing process with pid 66515 00:06:18.701 10:11:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:18.701 10:11:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:18.701 10:11:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66515' 00:06:18.701 10:11:32 -- common/autotest_common.sh@945 -- # kill 66515 00:06:18.701 10:11:32 -- common/autotest_common.sh@950 -- # wait 66515 00:06:18.959 spdk_app_start is called in Round 0. 00:06:18.959 Shutdown signal received, stop current app iteration 00:06:18.959 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 reinitialization... 00:06:18.959 spdk_app_start is called in Round 1. 00:06:18.959 Shutdown signal received, stop current app iteration 00:06:18.959 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 reinitialization... 00:06:18.959 spdk_app_start is called in Round 2. 00:06:18.959 Shutdown signal received, stop current app iteration 00:06:18.959 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 reinitialization... 00:06:18.959 spdk_app_start is called in Round 3. 
00:06:18.959 Shutdown signal received, stop current app iteration 00:06:18.959 ************************************ 00:06:18.959 END TEST app_repeat 00:06:18.959 ************************************ 00:06:18.959 10:11:32 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:18.959 10:11:32 -- event/event.sh@42 -- # return 0 00:06:18.959 00:06:18.959 real 0m18.672s 00:06:18.959 user 0m41.813s 00:06:18.959 sys 0m2.775s 00:06:18.959 10:11:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.959 10:11:32 -- common/autotest_common.sh@10 -- # set +x 00:06:18.959 10:11:32 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:18.959 10:11:32 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:18.959 10:11:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:18.959 10:11:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.959 10:11:32 -- common/autotest_common.sh@10 -- # set +x 00:06:18.959 ************************************ 00:06:18.959 START TEST cpu_locks 00:06:18.959 ************************************ 00:06:18.959 10:11:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:19.218 * Looking for test storage... 00:06:19.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:19.218 10:11:32 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:19.218 10:11:32 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:19.218 10:11:32 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:19.218 10:11:32 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:19.218 10:11:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:19.218 10:11:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:19.218 10:11:32 -- common/autotest_common.sh@10 -- # set +x 00:06:19.218 ************************************ 00:06:19.218 START TEST default_locks 00:06:19.218 ************************************ 00:06:19.218 10:11:32 -- common/autotest_common.sh@1104 -- # default_locks 00:06:19.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.218 10:11:32 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=66941 00:06:19.218 10:11:32 -- event/cpu_locks.sh@47 -- # waitforlisten 66941 00:06:19.218 10:11:32 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.218 10:11:32 -- common/autotest_common.sh@819 -- # '[' -z 66941 ']' 00:06:19.218 10:11:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.218 10:11:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:19.218 10:11:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.218 10:11:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:19.218 10:11:32 -- common/autotest_common.sh@10 -- # set +x 00:06:19.218 [2024-07-26 10:11:32.518687] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
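The killprocess calls in the teardown above all follow one recognizable pattern. A condensed sketch of what they appear to do (function name and control flow simplified; the individual commands are the ones shown in the log):

  killprocess_sketch() {
      local pid=$1
      kill -0 "$pid" || return 1                        # is the process still alive?
      if [ "$(uname)" = Linux ]; then
          local name; name=$(ps --no-headers -o comm= "$pid")
          [ "$name" = sudo ] && return 1                # only kill the reactor itself, never a sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                       # reap it so the next test starts clean
  }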
00:06:19.218 [2024-07-26 10:11:32.518788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66941 ] 00:06:19.218 [2024-07-26 10:11:32.653009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.477 [2024-07-26 10:11:32.744951] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:19.477 [2024-07-26 10:11:32.745135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.043 10:11:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:20.043 10:11:33 -- common/autotest_common.sh@852 -- # return 0 00:06:20.043 10:11:33 -- event/cpu_locks.sh@49 -- # locks_exist 66941 00:06:20.043 10:11:33 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.043 10:11:33 -- event/cpu_locks.sh@22 -- # lslocks -p 66941 00:06:20.610 10:11:33 -- event/cpu_locks.sh@50 -- # killprocess 66941 00:06:20.610 10:11:33 -- common/autotest_common.sh@926 -- # '[' -z 66941 ']' 00:06:20.610 10:11:33 -- common/autotest_common.sh@930 -- # kill -0 66941 00:06:20.610 10:11:33 -- common/autotest_common.sh@931 -- # uname 00:06:20.610 10:11:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:20.610 10:11:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66941 00:06:20.610 10:11:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:20.610 10:11:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:20.610 killing process with pid 66941 00:06:20.610 10:11:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66941' 00:06:20.610 10:11:33 -- common/autotest_common.sh@945 -- # kill 66941 00:06:20.610 10:11:33 -- common/autotest_common.sh@950 -- # wait 66941 00:06:21.177 10:11:34 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 66941 00:06:21.177 10:11:34 -- common/autotest_common.sh@640 -- # local es=0 00:06:21.177 10:11:34 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 66941 00:06:21.177 10:11:34 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:21.177 10:11:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:21.177 10:11:34 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:21.177 10:11:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:21.177 10:11:34 -- common/autotest_common.sh@643 -- # waitforlisten 66941 00:06:21.177 10:11:34 -- common/autotest_common.sh@819 -- # '[' -z 66941 ']' 00:06:21.177 10:11:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.177 10:11:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:21.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.177 10:11:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:21.177 10:11:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:21.177 10:11:34 -- common/autotest_common.sh@10 -- # set +x 00:06:21.177 ERROR: process (pid: 66941) is no longer running 00:06:21.177 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (66941) - No such process 00:06:21.177 10:11:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:21.177 10:11:34 -- common/autotest_common.sh@852 -- # return 1 00:06:21.177 10:11:34 -- common/autotest_common.sh@643 -- # es=1 00:06:21.177 10:11:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:21.177 10:11:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:21.177 10:11:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:21.177 10:11:34 -- event/cpu_locks.sh@54 -- # no_locks 00:06:21.177 10:11:34 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:21.177 10:11:34 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:21.177 10:11:34 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:21.177 00:06:21.177 real 0m2.000s 00:06:21.177 user 0m2.099s 00:06:21.178 sys 0m0.559s 00:06:21.178 ************************************ 00:06:21.178 10:11:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.178 10:11:34 -- common/autotest_common.sh@10 -- # set +x 00:06:21.178 END TEST default_locks 00:06:21.178 ************************************ 00:06:21.178 10:11:34 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:21.178 10:11:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:21.178 10:11:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:21.178 10:11:34 -- common/autotest_common.sh@10 -- # set +x 00:06:21.178 ************************************ 00:06:21.178 START TEST default_locks_via_rpc 00:06:21.178 ************************************ 00:06:21.178 10:11:34 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:21.178 10:11:34 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=66993 00:06:21.178 10:11:34 -- event/cpu_locks.sh@63 -- # waitforlisten 66993 00:06:21.178 10:11:34 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.178 10:11:34 -- common/autotest_common.sh@819 -- # '[' -z 66993 ']' 00:06:21.178 10:11:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.178 10:11:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:21.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.178 10:11:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.178 10:11:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:21.178 10:11:34 -- common/autotest_common.sh@10 -- # set +x 00:06:21.178 [2024-07-26 10:11:34.565411] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
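The default_locks case that just finished amounts to: start spdk_tgt on core 0, then confirm the per-core lock file is held by that pid. A minimal sketch of the check, using the same commands that appear in the log (binary path shortened, startup wait omitted):

  build/bin/spdk_tgt -m 0x1 &                      # single-core target, should claim the core 0 lock
  tgt_pid=$!
  # after waiting for the RPC socket as the harness does:
  lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock    # pass if the pid holds an spdk_cpu_lock file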
00:06:21.178 [2024-07-26 10:11:34.565540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66993 ] 00:06:21.436 [2024-07-26 10:11:34.703464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.436 [2024-07-26 10:11:34.813964] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:21.436 [2024-07-26 10:11:34.814173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.370 10:11:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:22.370 10:11:35 -- common/autotest_common.sh@852 -- # return 0 00:06:22.370 10:11:35 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:22.370 10:11:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:22.370 10:11:35 -- common/autotest_common.sh@10 -- # set +x 00:06:22.370 10:11:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:22.370 10:11:35 -- event/cpu_locks.sh@67 -- # no_locks 00:06:22.370 10:11:35 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:22.371 10:11:35 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:22.371 10:11:35 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:22.371 10:11:35 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:22.371 10:11:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:22.371 10:11:35 -- common/autotest_common.sh@10 -- # set +x 00:06:22.371 10:11:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:22.371 10:11:35 -- event/cpu_locks.sh@71 -- # locks_exist 66993 00:06:22.371 10:11:35 -- event/cpu_locks.sh@22 -- # lslocks -p 66993 00:06:22.371 10:11:35 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.629 10:11:35 -- event/cpu_locks.sh@73 -- # killprocess 66993 00:06:22.629 10:11:35 -- common/autotest_common.sh@926 -- # '[' -z 66993 ']' 00:06:22.629 10:11:35 -- common/autotest_common.sh@930 -- # kill -0 66993 00:06:22.629 10:11:35 -- common/autotest_common.sh@931 -- # uname 00:06:22.629 10:11:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:22.629 10:11:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66993 00:06:22.629 killing process with pid 66993 00:06:22.629 10:11:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:22.629 10:11:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:22.629 10:11:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66993' 00:06:22.629 10:11:36 -- common/autotest_common.sh@945 -- # kill 66993 00:06:22.629 10:11:36 -- common/autotest_common.sh@950 -- # wait 66993 00:06:23.232 ************************************ 00:06:23.232 END TEST default_locks_via_rpc 00:06:23.232 ************************************ 00:06:23.232 00:06:23.232 real 0m2.049s 00:06:23.232 user 0m2.077s 00:06:23.232 sys 0m0.693s 00:06:23.232 10:11:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.232 10:11:36 -- common/autotest_common.sh@10 -- # set +x 00:06:23.232 10:11:36 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:23.232 10:11:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:23.232 10:11:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.232 10:11:36 -- common/autotest_common.sh@10 -- # set +x 00:06:23.232 
************************************ 00:06:23.232 START TEST non_locking_app_on_locked_coremask 00:06:23.232 ************************************ 00:06:23.232 10:11:36 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:23.232 10:11:36 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=67044 00:06:23.232 10:11:36 -- event/cpu_locks.sh@81 -- # waitforlisten 67044 /var/tmp/spdk.sock 00:06:23.232 10:11:36 -- common/autotest_common.sh@819 -- # '[' -z 67044 ']' 00:06:23.232 10:11:36 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.232 10:11:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.232 10:11:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:23.233 10:11:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.233 10:11:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:23.233 10:11:36 -- common/autotest_common.sh@10 -- # set +x 00:06:23.233 [2024-07-26 10:11:36.666939] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:23.233 [2024-07-26 10:11:36.667057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67044 ] 00:06:23.492 [2024-07-26 10:11:36.802061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.492 [2024-07-26 10:11:36.899386] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:23.492 [2024-07-26 10:11:36.899616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.427 10:11:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:24.427 10:11:37 -- common/autotest_common.sh@852 -- # return 0 00:06:24.427 10:11:37 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:24.427 10:11:37 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=67060 00:06:24.427 10:11:37 -- event/cpu_locks.sh@85 -- # waitforlisten 67060 /var/tmp/spdk2.sock 00:06:24.427 10:11:37 -- common/autotest_common.sh@819 -- # '[' -z 67060 ']' 00:06:24.427 10:11:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.427 10:11:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.427 10:11:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.427 10:11:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.427 10:11:37 -- common/autotest_common.sh@10 -- # set +x 00:06:24.427 [2024-07-26 10:11:37.680164] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:24.427 [2024-07-26 10:11:37.680257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67060 ] 00:06:24.427 [2024-07-26 10:11:37.822892] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:24.427 [2024-07-26 10:11:37.822935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.685 [2024-07-26 10:11:38.078337] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:24.685 [2024-07-26 10:11:38.078590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.060 10:11:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:26.060 10:11:39 -- common/autotest_common.sh@852 -- # return 0 00:06:26.060 10:11:39 -- event/cpu_locks.sh@87 -- # locks_exist 67044 00:06:26.061 10:11:39 -- event/cpu_locks.sh@22 -- # lslocks -p 67044 00:06:26.061 10:11:39 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.627 10:11:40 -- event/cpu_locks.sh@89 -- # killprocess 67044 00:06:26.627 10:11:40 -- common/autotest_common.sh@926 -- # '[' -z 67044 ']' 00:06:26.627 10:11:40 -- common/autotest_common.sh@930 -- # kill -0 67044 00:06:26.627 10:11:40 -- common/autotest_common.sh@931 -- # uname 00:06:26.627 10:11:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:26.627 10:11:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67044 00:06:26.886 10:11:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:26.886 10:11:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:26.886 10:11:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67044' 00:06:26.886 killing process with pid 67044 00:06:26.886 10:11:40 -- common/autotest_common.sh@945 -- # kill 67044 00:06:26.886 10:11:40 -- common/autotest_common.sh@950 -- # wait 67044 00:06:27.821 10:11:41 -- event/cpu_locks.sh@90 -- # killprocess 67060 00:06:27.821 10:11:41 -- common/autotest_common.sh@926 -- # '[' -z 67060 ']' 00:06:27.821 10:11:41 -- common/autotest_common.sh@930 -- # kill -0 67060 00:06:27.821 10:11:41 -- common/autotest_common.sh@931 -- # uname 00:06:27.821 10:11:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:27.821 10:11:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67060 00:06:27.821 10:11:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:27.821 10:11:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:27.821 killing process with pid 67060 00:06:27.821 10:11:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67060' 00:06:27.821 10:11:41 -- common/autotest_common.sh@945 -- # kill 67060 00:06:27.821 10:11:41 -- common/autotest_common.sh@950 -- # wait 67060 00:06:28.388 00:06:28.388 real 0m5.111s 00:06:28.388 user 0m5.498s 00:06:28.388 sys 0m1.280s 00:06:28.388 10:11:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.388 10:11:41 -- common/autotest_common.sh@10 -- # set +x 00:06:28.388 ************************************ 00:06:28.388 END TEST non_locking_app_on_locked_coremask 00:06:28.388 ************************************ 00:06:28.388 10:11:41 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:28.388 10:11:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:28.388 10:11:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:28.388 10:11:41 -- common/autotest_common.sh@10 -- # set +x 00:06:28.388 ************************************ 00:06:28.388 START TEST locking_app_on_unlocked_coremask 00:06:28.388 ************************************ 00:06:28.388 10:11:41 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:28.388 10:11:41 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=67140 00:06:28.388 10:11:41 -- event/cpu_locks.sh@99 -- # waitforlisten 67140 /var/tmp/spdk.sock 00:06:28.388 10:11:41 -- common/autotest_common.sh@819 -- # '[' -z 67140 ']' 00:06:28.388 10:11:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.388 10:11:41 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:28.388 10:11:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:28.388 10:11:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.388 10:11:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:28.388 10:11:41 -- common/autotest_common.sh@10 -- # set +x 00:06:28.388 [2024-07-26 10:11:41.830713] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:28.388 [2024-07-26 10:11:41.830811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67140 ] 00:06:28.646 [2024-07-26 10:11:41.968642] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:28.646 [2024-07-26 10:11:41.968703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.646 [2024-07-26 10:11:42.079164] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:28.646 [2024-07-26 10:11:42.079353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.626 10:11:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:29.626 10:11:42 -- common/autotest_common.sh@852 -- # return 0 00:06:29.626 10:11:42 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=67156 00:06:29.626 10:11:42 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:29.626 10:11:42 -- event/cpu_locks.sh@103 -- # waitforlisten 67156 /var/tmp/spdk2.sock 00:06:29.626 10:11:42 -- common/autotest_common.sh@819 -- # '[' -z 67156 ']' 00:06:29.626 10:11:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.626 10:11:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:29.627 10:11:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.627 10:11:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:29.627 10:11:42 -- common/autotest_common.sh@10 -- # set +x 00:06:29.627 [2024-07-26 10:11:42.853540] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:06:29.627 [2024-07-26 10:11:42.853657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67156 ] 00:06:29.627 [2024-07-26 10:11:43.000075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.884 [2024-07-26 10:11:43.231581] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:29.884 [2024-07-26 10:11:43.231756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.258 10:11:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:31.258 10:11:44 -- common/autotest_common.sh@852 -- # return 0 00:06:31.258 10:11:44 -- event/cpu_locks.sh@105 -- # locks_exist 67156 00:06:31.258 10:11:44 -- event/cpu_locks.sh@22 -- # lslocks -p 67156 00:06:31.258 10:11:44 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.825 10:11:45 -- event/cpu_locks.sh@107 -- # killprocess 67140 00:06:31.825 10:11:45 -- common/autotest_common.sh@926 -- # '[' -z 67140 ']' 00:06:31.825 10:11:45 -- common/autotest_common.sh@930 -- # kill -0 67140 00:06:31.825 10:11:45 -- common/autotest_common.sh@931 -- # uname 00:06:31.825 10:11:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:31.825 10:11:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67140 00:06:31.825 10:11:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:31.825 10:11:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:31.825 killing process with pid 67140 00:06:31.825 10:11:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67140' 00:06:31.825 10:11:45 -- common/autotest_common.sh@945 -- # kill 67140 00:06:31.825 10:11:45 -- common/autotest_common.sh@950 -- # wait 67140 00:06:32.760 10:11:46 -- event/cpu_locks.sh@108 -- # killprocess 67156 00:06:32.760 10:11:46 -- common/autotest_common.sh@926 -- # '[' -z 67156 ']' 00:06:32.760 10:11:46 -- common/autotest_common.sh@930 -- # kill -0 67156 00:06:32.760 10:11:46 -- common/autotest_common.sh@931 -- # uname 00:06:33.018 10:11:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:33.018 10:11:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67156 00:06:33.018 10:11:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:33.018 10:11:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:33.018 killing process with pid 67156 00:06:33.018 10:11:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67156' 00:06:33.018 10:11:46 -- common/autotest_common.sh@945 -- # kill 67156 00:06:33.018 10:11:46 -- common/autotest_common.sh@950 -- # wait 67156 00:06:33.585 00:06:33.585 real 0m5.008s 00:06:33.585 user 0m5.386s 00:06:33.585 sys 0m1.247s 00:06:33.585 10:11:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.585 10:11:46 -- common/autotest_common.sh@10 -- # set +x 00:06:33.585 ************************************ 00:06:33.585 END TEST locking_app_on_unlocked_coremask 00:06:33.585 ************************************ 00:06:33.585 10:11:46 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:33.585 10:11:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:33.585 10:11:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:33.585 10:11:46 -- common/autotest_common.sh@10 -- # set +x 
00:06:33.585 ************************************ 00:06:33.585 START TEST locking_app_on_locked_coremask 00:06:33.585 ************************************ 00:06:33.585 10:11:46 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:33.585 10:11:46 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=67236 00:06:33.585 10:11:46 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.585 10:11:46 -- event/cpu_locks.sh@116 -- # waitforlisten 67236 /var/tmp/spdk.sock 00:06:33.585 10:11:46 -- common/autotest_common.sh@819 -- # '[' -z 67236 ']' 00:06:33.585 10:11:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.585 10:11:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:33.585 10:11:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.585 10:11:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:33.585 10:11:46 -- common/autotest_common.sh@10 -- # set +x 00:06:33.585 [2024-07-26 10:11:46.889751] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:33.585 [2024-07-26 10:11:46.889872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67236 ] 00:06:33.585 [2024-07-26 10:11:47.027328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.843 [2024-07-26 10:11:47.130770] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:33.843 [2024-07-26 10:11:47.130928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.409 10:11:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:34.409 10:11:47 -- common/autotest_common.sh@852 -- # return 0 00:06:34.409 10:11:47 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=67252 00:06:34.409 10:11:47 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:34.409 10:11:47 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 67252 /var/tmp/spdk2.sock 00:06:34.409 10:11:47 -- common/autotest_common.sh@640 -- # local es=0 00:06:34.409 10:11:47 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 67252 /var/tmp/spdk2.sock 00:06:34.409 10:11:47 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:34.409 10:11:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:34.409 10:11:47 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:34.409 10:11:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:34.409 10:11:47 -- common/autotest_common.sh@643 -- # waitforlisten 67252 /var/tmp/spdk2.sock 00:06:34.409 10:11:47 -- common/autotest_common.sh@819 -- # '[' -z 67252 ']' 00:06:34.409 10:11:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.409 10:11:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:34.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.409 10:11:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:34.409 10:11:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:34.409 10:11:47 -- common/autotest_common.sh@10 -- # set +x 00:06:34.667 [2024-07-26 10:11:47.917353] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:34.667 [2024-07-26 10:11:47.917459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67252 ] 00:06:34.667 [2024-07-26 10:11:48.062442] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 67236 has claimed it. 00:06:34.667 [2024-07-26 10:11:48.062519] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:35.233 ERROR: process (pid: 67252) is no longer running 00:06:35.233 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (67252) - No such process 00:06:35.233 10:11:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:35.233 10:11:48 -- common/autotest_common.sh@852 -- # return 1 00:06:35.233 10:11:48 -- common/autotest_common.sh@643 -- # es=1 00:06:35.233 10:11:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:35.233 10:11:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:35.233 10:11:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:35.233 10:11:48 -- event/cpu_locks.sh@122 -- # locks_exist 67236 00:06:35.233 10:11:48 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.233 10:11:48 -- event/cpu_locks.sh@22 -- # lslocks -p 67236 00:06:35.800 10:11:49 -- event/cpu_locks.sh@124 -- # killprocess 67236 00:06:35.800 10:11:49 -- common/autotest_common.sh@926 -- # '[' -z 67236 ']' 00:06:35.800 10:11:49 -- common/autotest_common.sh@930 -- # kill -0 67236 00:06:35.800 10:11:49 -- common/autotest_common.sh@931 -- # uname 00:06:35.800 10:11:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:35.800 10:11:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67236 00:06:35.800 killing process with pid 67236 00:06:35.800 10:11:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:35.800 10:11:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:35.800 10:11:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67236' 00:06:35.800 10:11:49 -- common/autotest_common.sh@945 -- # kill 67236 00:06:35.800 10:11:49 -- common/autotest_common.sh@950 -- # wait 67236 00:06:36.428 00:06:36.428 real 0m2.776s 00:06:36.428 user 0m3.027s 00:06:36.428 sys 0m0.747s 00:06:36.428 10:11:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.428 10:11:49 -- common/autotest_common.sh@10 -- # set +x 00:06:36.428 ************************************ 00:06:36.428 END TEST locking_app_on_locked_coremask 00:06:36.428 ************************************ 00:06:36.428 10:11:49 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:36.428 10:11:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:36.428 10:11:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.428 10:11:49 -- common/autotest_common.sh@10 -- # set +x 00:06:36.428 ************************************ 00:06:36.428 START TEST locking_overlapped_coremask 00:06:36.428 ************************************ 00:06:36.428 10:11:49 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:36.428 10:11:49 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=67303 00:06:36.428 10:11:49 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:36.428 10:11:49 -- event/cpu_locks.sh@133 -- # waitforlisten 67303 /var/tmp/spdk.sock 00:06:36.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.428 10:11:49 -- common/autotest_common.sh@819 -- # '[' -z 67303 ']' 00:06:36.428 10:11:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.428 10:11:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:36.428 10:11:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.428 10:11:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:36.428 10:11:49 -- common/autotest_common.sh@10 -- # set +x 00:06:36.428 [2024-07-26 10:11:49.722213] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:36.428 [2024-07-26 10:11:49.722327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67303 ] 00:06:36.702 [2024-07-26 10:11:49.864193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.702 [2024-07-26 10:11:49.962859] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:36.702 [2024-07-26 10:11:49.963400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.702 [2024-07-26 10:11:49.963493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.703 [2024-07-26 10:11:49.963496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.269 10:11:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:37.269 10:11:50 -- common/autotest_common.sh@852 -- # return 0 00:06:37.269 10:11:50 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=67321 00:06:37.269 10:11:50 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:37.269 10:11:50 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 67321 /var/tmp/spdk2.sock 00:06:37.269 10:11:50 -- common/autotest_common.sh@640 -- # local es=0 00:06:37.269 10:11:50 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 67321 /var/tmp/spdk2.sock 00:06:37.269 10:11:50 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:37.269 10:11:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:37.269 10:11:50 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:37.269 10:11:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:37.269 10:11:50 -- common/autotest_common.sh@643 -- # waitforlisten 67321 /var/tmp/spdk2.sock 00:06:37.269 10:11:50 -- common/autotest_common.sh@819 -- # '[' -z 67321 ']' 00:06:37.269 10:11:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.269 10:11:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:37.269 10:11:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:37.269 10:11:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:37.269 10:11:50 -- common/autotest_common.sh@10 -- # set +x 00:06:37.528 [2024-07-26 10:11:50.762148] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:37.528 [2024-07-26 10:11:50.762521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67321 ] 00:06:37.529 [2024-07-26 10:11:50.909217] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67303 has claimed it. 00:06:37.529 [2024-07-26 10:11:50.909284] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:38.096 ERROR: process (pid: 67321) is no longer running 00:06:38.096 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (67321) - No such process 00:06:38.096 10:11:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:38.096 10:11:51 -- common/autotest_common.sh@852 -- # return 1 00:06:38.096 10:11:51 -- common/autotest_common.sh@643 -- # es=1 00:06:38.096 10:11:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:38.096 10:11:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:38.096 10:11:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:38.096 10:11:51 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:38.096 10:11:51 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:38.096 10:11:51 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:38.096 10:11:51 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:38.096 10:11:51 -- event/cpu_locks.sh@141 -- # killprocess 67303 00:06:38.096 10:11:51 -- common/autotest_common.sh@926 -- # '[' -z 67303 ']' 00:06:38.096 10:11:51 -- common/autotest_common.sh@930 -- # kill -0 67303 00:06:38.096 10:11:51 -- common/autotest_common.sh@931 -- # uname 00:06:38.096 10:11:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:38.096 10:11:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67303 00:06:38.096 10:11:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:38.096 10:11:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:38.096 10:11:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67303' 00:06:38.096 killing process with pid 67303 00:06:38.096 10:11:51 -- common/autotest_common.sh@945 -- # kill 67303 00:06:38.096 10:11:51 -- common/autotest_common.sh@950 -- # wait 67303 00:06:38.663 00:06:38.663 real 0m2.223s 00:06:38.663 user 0m6.224s 00:06:38.663 sys 0m0.432s 00:06:38.663 10:11:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.663 10:11:51 -- common/autotest_common.sh@10 -- # set +x 00:06:38.663 ************************************ 00:06:38.663 END TEST locking_overlapped_coremask 00:06:38.663 ************************************ 00:06:38.663 10:11:51 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:38.663 10:11:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:38.663 10:11:51 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.663 10:11:51 -- common/autotest_common.sh@10 -- # set +x 00:06:38.663 ************************************ 00:06:38.663 START TEST locking_overlapped_coremask_via_rpc 00:06:38.663 ************************************ 00:06:38.663 10:11:51 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:38.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.663 10:11:51 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=67361 00:06:38.663 10:11:51 -- event/cpu_locks.sh@149 -- # waitforlisten 67361 /var/tmp/spdk.sock 00:06:38.663 10:11:51 -- common/autotest_common.sh@819 -- # '[' -z 67361 ']' 00:06:38.663 10:11:51 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:38.663 10:11:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.663 10:11:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:38.663 10:11:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.663 10:11:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:38.663 10:11:51 -- common/autotest_common.sh@10 -- # set +x 00:06:38.663 [2024-07-26 10:11:51.996310] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:38.663 [2024-07-26 10:11:51.996423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67361 ] 00:06:38.921 [2024-07-26 10:11:52.130896] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:38.921 [2024-07-26 10:11:52.130943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.921 [2024-07-26 10:11:52.223353] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:38.921 [2024-07-26 10:11:52.223856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.921 [2024-07-26 10:11:52.223934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.921 [2024-07-26 10:11:52.223939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.489 10:11:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:39.489 10:11:52 -- common/autotest_common.sh@852 -- # return 0 00:06:39.489 10:11:52 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:39.489 10:11:52 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=67379 00:06:39.489 10:11:52 -- event/cpu_locks.sh@153 -- # waitforlisten 67379 /var/tmp/spdk2.sock 00:06:39.489 10:11:52 -- common/autotest_common.sh@819 -- # '[' -z 67379 ']' 00:06:39.489 10:11:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.489 10:11:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:39.489 10:11:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:39.489 10:11:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:39.489 10:11:52 -- common/autotest_common.sh@10 -- # set +x 00:06:39.748 [2024-07-26 10:11:52.980250] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:39.748 [2024-07-26 10:11:52.980323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67379 ] 00:06:39.748 [2024-07-26 10:11:53.120711] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:39.748 [2024-07-26 10:11:53.120793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.006 [2024-07-26 10:11:53.291585] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:40.006 [2024-07-26 10:11:53.291926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.006 [2024-07-26 10:11:53.295744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:40.006 [2024-07-26 10:11:53.295745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.572 10:11:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:40.572 10:11:53 -- common/autotest_common.sh@852 -- # return 0 00:06:40.572 10:11:53 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:40.572 10:11:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:40.572 10:11:53 -- common/autotest_common.sh@10 -- # set +x 00:06:40.572 10:11:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:40.572 10:11:53 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.572 10:11:53 -- common/autotest_common.sh@640 -- # local es=0 00:06:40.572 10:11:53 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.572 10:11:53 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:40.572 10:11:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:40.572 10:11:53 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:40.572 10:11:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:40.572 10:11:53 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.572 10:11:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:40.572 10:11:53 -- common/autotest_common.sh@10 -- # set +x 00:06:40.572 [2024-07-26 10:11:53.965742] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67361 has claimed it. 00:06:40.572 request: 00:06:40.572 { 00:06:40.572 "method": "framework_enable_cpumask_locks", 00:06:40.572 "req_id": 1 00:06:40.572 } 00:06:40.572 Got JSON-RPC error response 00:06:40.572 response: 00:06:40.572 { 00:06:40.572 "code": -32603, 00:06:40.572 "message": "Failed to claim CPU core: 2" 00:06:40.572 } 00:06:40.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:40.572 10:11:53 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:40.572 10:11:53 -- common/autotest_common.sh@643 -- # es=1 00:06:40.572 10:11:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:40.572 10:11:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:40.572 10:11:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:40.572 10:11:53 -- event/cpu_locks.sh@158 -- # waitforlisten 67361 /var/tmp/spdk.sock 00:06:40.572 10:11:53 -- common/autotest_common.sh@819 -- # '[' -z 67361 ']' 00:06:40.572 10:11:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.572 10:11:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:40.572 10:11:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.572 10:11:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:40.572 10:11:53 -- common/autotest_common.sh@10 -- # set +x 00:06:40.830 10:11:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:40.830 10:11:54 -- common/autotest_common.sh@852 -- # return 0 00:06:40.830 10:11:54 -- event/cpu_locks.sh@159 -- # waitforlisten 67379 /var/tmp/spdk2.sock 00:06:40.830 10:11:54 -- common/autotest_common.sh@819 -- # '[' -z 67379 ']' 00:06:40.830 10:11:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.830 10:11:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:40.830 10:11:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.830 10:11:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:40.830 10:11:54 -- common/autotest_common.sh@10 -- # set +x 00:06:41.089 10:11:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:41.089 10:11:54 -- common/autotest_common.sh@852 -- # return 0 00:06:41.089 10:11:54 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:41.089 10:11:54 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:41.089 10:11:54 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:41.089 ************************************ 00:06:41.089 END TEST locking_overlapped_coremask_via_rpc 00:06:41.089 ************************************ 00:06:41.089 10:11:54 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:41.089 00:06:41.089 real 0m2.567s 00:06:41.089 user 0m1.307s 00:06:41.089 sys 0m0.182s 00:06:41.089 10:11:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.089 10:11:54 -- common/autotest_common.sh@10 -- # set +x 00:06:41.089 10:11:54 -- event/cpu_locks.sh@174 -- # cleanup 00:06:41.089 10:11:54 -- event/cpu_locks.sh@15 -- # [[ -z 67361 ]] 00:06:41.089 10:11:54 -- event/cpu_locks.sh@15 -- # killprocess 67361 00:06:41.089 10:11:54 -- common/autotest_common.sh@926 -- # '[' -z 67361 ']' 00:06:41.089 10:11:54 -- common/autotest_common.sh@930 -- # kill -0 67361 00:06:41.089 10:11:54 -- common/autotest_common.sh@931 -- # uname 00:06:41.089 10:11:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:41.089 10:11:54 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 67361 00:06:41.347 killing process with pid 67361 00:06:41.347 10:11:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:41.347 10:11:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:41.347 10:11:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67361' 00:06:41.347 10:11:54 -- common/autotest_common.sh@945 -- # kill 67361 00:06:41.347 10:11:54 -- common/autotest_common.sh@950 -- # wait 67361 00:06:41.605 10:11:54 -- event/cpu_locks.sh@16 -- # [[ -z 67379 ]] 00:06:41.605 10:11:54 -- event/cpu_locks.sh@16 -- # killprocess 67379 00:06:41.605 10:11:54 -- common/autotest_common.sh@926 -- # '[' -z 67379 ']' 00:06:41.605 10:11:54 -- common/autotest_common.sh@930 -- # kill -0 67379 00:06:41.605 10:11:54 -- common/autotest_common.sh@931 -- # uname 00:06:41.605 10:11:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:41.605 10:11:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67379 00:06:41.605 killing process with pid 67379 00:06:41.605 10:11:54 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:41.605 10:11:54 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:41.605 10:11:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67379' 00:06:41.605 10:11:54 -- common/autotest_common.sh@945 -- # kill 67379 00:06:41.605 10:11:54 -- common/autotest_common.sh@950 -- # wait 67379 00:06:42.172 10:11:55 -- event/cpu_locks.sh@18 -- # rm -f 00:06:42.172 10:11:55 -- event/cpu_locks.sh@1 -- # cleanup 00:06:42.172 10:11:55 -- event/cpu_locks.sh@15 -- # [[ -z 67361 ]] 00:06:42.172 10:11:55 -- event/cpu_locks.sh@15 -- # killprocess 67361 00:06:42.172 10:11:55 -- common/autotest_common.sh@926 -- # '[' -z 67361 ']' 00:06:42.172 10:11:55 -- common/autotest_common.sh@930 -- # kill -0 67361 00:06:42.172 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (67361) - No such process 00:06:42.172 Process with pid 67361 is not found 00:06:42.172 10:11:55 -- common/autotest_common.sh@953 -- # echo 'Process with pid 67361 is not found' 00:06:42.172 10:11:55 -- event/cpu_locks.sh@16 -- # [[ -z 67379 ]] 00:06:42.172 Process with pid 67379 is not found 00:06:42.172 10:11:55 -- event/cpu_locks.sh@16 -- # killprocess 67379 00:06:42.172 10:11:55 -- common/autotest_common.sh@926 -- # '[' -z 67379 ']' 00:06:42.172 10:11:55 -- common/autotest_common.sh@930 -- # kill -0 67379 00:06:42.172 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (67379) - No such process 00:06:42.172 10:11:55 -- common/autotest_common.sh@953 -- # echo 'Process with pid 67379 is not found' 00:06:42.172 10:11:55 -- event/cpu_locks.sh@18 -- # rm -f 00:06:42.172 00:06:42.172 real 0m22.962s 00:06:42.172 user 0m37.704s 00:06:42.172 sys 0m6.023s 00:06:42.172 10:11:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.172 10:11:55 -- common/autotest_common.sh@10 -- # set +x 00:06:42.172 ************************************ 00:06:42.172 END TEST cpu_locks 00:06:42.172 ************************************ 00:06:42.172 00:06:42.172 real 0m49.524s 00:06:42.172 user 1m32.328s 00:06:42.172 sys 0m9.550s 00:06:42.172 10:11:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.172 10:11:55 -- common/autotest_common.sh@10 -- # set +x 00:06:42.172 ************************************ 00:06:42.172 END TEST event 00:06:42.172 ************************************ 00:06:42.172 10:11:55 -- spdk/autotest.sh@188 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:42.172 10:11:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:42.172 10:11:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.172 10:11:55 -- common/autotest_common.sh@10 -- # set +x 00:06:42.172 ************************************ 00:06:42.172 START TEST thread 00:06:42.172 ************************************ 00:06:42.172 10:11:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:42.172 * Looking for test storage... 00:06:42.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:42.172 10:11:55 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:42.172 10:11:55 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:42.172 10:11:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.172 10:11:55 -- common/autotest_common.sh@10 -- # set +x 00:06:42.172 ************************************ 00:06:42.172 START TEST thread_poller_perf 00:06:42.172 ************************************ 00:06:42.172 10:11:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:42.172 [2024-07-26 10:11:55.522780] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:42.172 [2024-07-26 10:11:55.522862] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67506 ] 00:06:42.430 [2024-07-26 10:11:55.659874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.430 [2024-07-26 10:11:55.739135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.430 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:43.403 ====================================== 00:06:43.403 busy:2210797921 (cyc) 00:06:43.403 total_run_count: 294000 00:06:43.403 tsc_hz: 2200000000 (cyc) 00:06:43.403 ====================================== 00:06:43.403 poller_cost: 7519 (cyc), 3417 (nsec) 00:06:43.403 00:06:43.403 real 0m1.308s 00:06:43.403 user 0m1.136s 00:06:43.403 sys 0m0.061s 00:06:43.403 10:11:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.403 10:11:56 -- common/autotest_common.sh@10 -- # set +x 00:06:43.403 ************************************ 00:06:43.403 END TEST thread_poller_perf 00:06:43.403 ************************************ 00:06:43.661 10:11:56 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:43.661 10:11:56 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:43.661 10:11:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.661 10:11:56 -- common/autotest_common.sh@10 -- # set +x 00:06:43.661 ************************************ 00:06:43.661 START TEST thread_poller_perf 00:06:43.661 ************************************ 00:06:43.661 10:11:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:43.662 [2024-07-26 10:11:56.887883] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:06:43.662 [2024-07-26 10:11:56.887988] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67536 ] 00:06:43.662 [2024-07-26 10:11:57.021201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.662 [2024-07-26 10:11:57.095903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.662 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:45.033 ====================================== 00:06:45.033 busy:2203301167 (cyc) 00:06:45.033 total_run_count: 4386000 00:06:45.033 tsc_hz: 2200000000 (cyc) 00:06:45.033 ====================================== 00:06:45.033 poller_cost: 502 (cyc), 228 (nsec) 00:06:45.033 00:06:45.033 real 0m1.288s 00:06:45.033 user 0m1.130s 00:06:45.033 sys 0m0.050s 00:06:45.033 10:11:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.033 10:11:58 -- common/autotest_common.sh@10 -- # set +x 00:06:45.033 ************************************ 00:06:45.033 END TEST thread_poller_perf 00:06:45.033 ************************************ 00:06:45.033 10:11:58 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:45.033 ************************************ 00:06:45.033 END TEST thread 00:06:45.033 ************************************ 00:06:45.033 00:06:45.033 real 0m2.778s 00:06:45.033 user 0m2.327s 00:06:45.033 sys 0m0.225s 00:06:45.033 10:11:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.034 10:11:58 -- common/autotest_common.sh@10 -- # set +x 00:06:45.034 10:11:58 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:45.034 10:11:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:45.034 10:11:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.034 10:11:58 -- common/autotest_common.sh@10 -- # set +x 00:06:45.034 ************************************ 00:06:45.034 START TEST accel 00:06:45.034 ************************************ 00:06:45.034 10:11:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:45.034 * Looking for test storage... 00:06:45.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:45.034 10:11:58 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:45.034 10:11:58 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:45.034 10:11:58 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:45.034 10:11:58 -- accel/accel.sh@59 -- # spdk_tgt_pid=67612 00:06:45.034 10:11:58 -- accel/accel.sh@60 -- # waitforlisten 67612 00:06:45.034 10:11:58 -- common/autotest_common.sh@819 -- # '[' -z 67612 ']' 00:06:45.034 10:11:58 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:45.034 10:11:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.034 10:11:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:45.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.034 10:11:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:45.034 10:11:58 -- accel/accel.sh@58 -- # build_accel_config 00:06:45.034 10:11:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:45.034 10:11:58 -- common/autotest_common.sh@10 -- # set +x 00:06:45.034 10:11:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.034 10:11:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.034 10:11:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.034 10:11:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.034 10:11:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.034 10:11:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.034 10:11:58 -- accel/accel.sh@42 -- # jq -r . 00:06:45.034 [2024-07-26 10:11:58.379293] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:45.034 [2024-07-26 10:11:58.379404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67612 ] 00:06:45.292 [2024-07-26 10:11:58.513395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.292 [2024-07-26 10:11:58.600570] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:45.292 [2024-07-26 10:11:58.600762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.858 10:11:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:45.858 10:11:59 -- common/autotest_common.sh@852 -- # return 0 00:06:45.858 10:11:59 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:45.858 10:11:59 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:45.858 10:11:59 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:46.117 10:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.117 10:11:59 -- common/autotest_common.sh@10 -- # set +x 00:06:46.117 10:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.117 10:11:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # IFS== 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.117 10:11:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.117 10:11:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # IFS== 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.117 10:11:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.117 10:11:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # IFS== 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.117 10:11:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.117 10:11:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # IFS== 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.117 10:11:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.117 10:11:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # IFS== 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.117 10:11:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.117 10:11:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # IFS== 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.117 10:11:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.117 10:11:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # IFS== 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.117 10:11:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.117 10:11:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # IFS== 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.117 10:11:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.117 10:11:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # IFS== 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.117 10:11:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.117 10:11:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # IFS== 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.117 10:11:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.117 10:11:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # IFS== 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.117 10:11:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.117 10:11:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # IFS== 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.117 
10:11:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.117 10:11:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # IFS== 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.117 10:11:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.117 10:11:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # IFS== 00:06:46.117 10:11:59 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.117 10:11:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.117 10:11:59 -- accel/accel.sh@67 -- # killprocess 67612 00:06:46.117 10:11:59 -- common/autotest_common.sh@926 -- # '[' -z 67612 ']' 00:06:46.117 10:11:59 -- common/autotest_common.sh@930 -- # kill -0 67612 00:06:46.117 10:11:59 -- common/autotest_common.sh@931 -- # uname 00:06:46.117 10:11:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:46.117 10:11:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67612 00:06:46.118 10:11:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:46.118 10:11:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:46.118 killing process with pid 67612 00:06:46.118 10:11:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67612' 00:06:46.118 10:11:59 -- common/autotest_common.sh@945 -- # kill 67612 00:06:46.118 10:11:59 -- common/autotest_common.sh@950 -- # wait 67612 00:06:46.376 10:11:59 -- accel/accel.sh@68 -- # trap - ERR 00:06:46.376 10:11:59 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:46.376 10:11:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:46.376 10:11:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.376 10:11:59 -- common/autotest_common.sh@10 -- # set +x 00:06:46.376 10:11:59 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:46.376 10:11:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:46.376 10:11:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.376 10:11:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.377 10:11:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.377 10:11:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.377 10:11:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.377 10:11:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.377 10:11:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.377 10:11:59 -- accel/accel.sh@42 -- # jq -r . 
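The long run of expected_opcs assignments above is accel.sh walking every exercised opcode and recording that the plain software module should service it. A minimal sketch of that parsing pattern, assuming the opc=module pairs live in an exp_opcs array and are split with a here-string (the array contents here are illustrative, not the script's real list):
declare -A expected_opcs
exp_opcs=("copy=software" "fill=software" "crc32c=software")   # illustrative entries
for opc_opt in "${exp_opcs[@]}"; do
    # IFS== makes read split each pair at the '=' sign into opcode and module
    IFS== read -r opc module <<< "$opc_opt"
    expected_opcs["$opc"]=software
done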
00:06:46.377 10:11:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.377 10:11:59 -- common/autotest_common.sh@10 -- # set +x 00:06:46.377 10:11:59 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:46.377 10:11:59 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:46.377 10:11:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.377 10:11:59 -- common/autotest_common.sh@10 -- # set +x 00:06:46.635 ************************************ 00:06:46.635 START TEST accel_missing_filename 00:06:46.635 ************************************ 00:06:46.635 10:11:59 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:46.635 10:11:59 -- common/autotest_common.sh@640 -- # local es=0 00:06:46.635 10:11:59 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:46.635 10:11:59 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:46.635 10:11:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.635 10:11:59 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:46.635 10:11:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.635 10:11:59 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:46.635 10:11:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:46.635 10:11:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.635 10:11:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.635 10:11:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.635 10:11:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.635 10:11:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.635 10:11:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.635 10:11:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.635 10:11:59 -- accel/accel.sh@42 -- # jq -r . 00:06:46.635 [2024-07-26 10:11:59.860909] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:46.635 [2024-07-26 10:11:59.861005] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67663 ] 00:06:46.635 [2024-07-26 10:11:59.999140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.635 [2024-07-26 10:12:00.091615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.894 [2024-07-26 10:12:00.147869] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.894 [2024-07-26 10:12:00.226459] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:46.894 A filename is required. 
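The "A filename is required." error above is the expected outcome: the compress workload needs an uncompressed input file passed with -l, so the harness runs accel_perf under its NOT wrapper and treats the non-zero exit as a pass. A hedged sketch of the same check, with the binary path shortened and the JSON config omitted:
# compress without -l <input file> must fail; succeeding here would be the real error
if ./build/examples/accel_perf -t 1 -w compress; then
    echo 'accel_perf unexpectedly succeeded without an input file' >&2
    exit 1
fi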
00:06:46.894 10:12:00 -- common/autotest_common.sh@643 -- # es=234 00:06:46.894 10:12:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:46.894 10:12:00 -- common/autotest_common.sh@652 -- # es=106 00:06:46.894 10:12:00 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:46.894 10:12:00 -- common/autotest_common.sh@660 -- # es=1 00:06:46.894 10:12:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:46.894 00:06:46.894 real 0m0.479s 00:06:46.894 user 0m0.310s 00:06:46.894 sys 0m0.116s 00:06:46.894 10:12:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.894 10:12:00 -- common/autotest_common.sh@10 -- # set +x 00:06:46.894 ************************************ 00:06:46.894 END TEST accel_missing_filename 00:06:46.894 ************************************ 00:06:46.894 10:12:00 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:46.894 10:12:00 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:46.894 10:12:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.894 10:12:00 -- common/autotest_common.sh@10 -- # set +x 00:06:47.153 ************************************ 00:06:47.153 START TEST accel_compress_verify 00:06:47.153 ************************************ 00:06:47.153 10:12:00 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:47.153 10:12:00 -- common/autotest_common.sh@640 -- # local es=0 00:06:47.153 10:12:00 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:47.153 10:12:00 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:47.153 10:12:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:47.153 10:12:00 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:47.153 10:12:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:47.153 10:12:00 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:47.153 10:12:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:47.153 10:12:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.153 10:12:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.153 10:12:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.153 10:12:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.153 10:12:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.153 10:12:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.153 10:12:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.153 10:12:00 -- accel/accel.sh@42 -- # jq -r . 00:06:47.153 [2024-07-26 10:12:00.377438] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:06:47.153 [2024-07-26 10:12:00.377527] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67688 ] 00:06:47.153 [2024-07-26 10:12:00.508546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.153 [2024-07-26 10:12:00.577607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.411 [2024-07-26 10:12:00.631612] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:47.411 [2024-07-26 10:12:00.711176] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:47.411 00:06:47.411 Compression does not support the verify option, aborting. 00:06:47.411 10:12:00 -- common/autotest_common.sh@643 -- # es=161 00:06:47.411 10:12:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:47.411 10:12:00 -- common/autotest_common.sh@652 -- # es=33 00:06:47.411 10:12:00 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:47.411 10:12:00 -- common/autotest_common.sh@660 -- # es=1 00:06:47.411 10:12:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:47.411 00:06:47.411 real 0m0.424s 00:06:47.411 user 0m0.254s 00:06:47.411 sys 0m0.112s 00:06:47.411 10:12:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.411 ************************************ 00:06:47.411 END TEST accel_compress_verify 00:06:47.411 10:12:00 -- common/autotest_common.sh@10 -- # set +x 00:06:47.411 ************************************ 00:06:47.411 10:12:00 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:47.412 10:12:00 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:47.412 10:12:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.412 10:12:00 -- common/autotest_common.sh@10 -- # set +x 00:06:47.412 ************************************ 00:06:47.412 START TEST accel_wrong_workload 00:06:47.412 ************************************ 00:06:47.412 10:12:00 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:47.412 10:12:00 -- common/autotest_common.sh@640 -- # local es=0 00:06:47.412 10:12:00 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:47.412 10:12:00 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:47.412 10:12:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:47.412 10:12:00 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:47.412 10:12:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:47.412 10:12:00 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:47.412 10:12:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:47.412 10:12:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.412 10:12:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.412 10:12:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.412 10:12:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.412 10:12:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.412 10:12:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.412 10:12:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.412 10:12:00 -- accel/accel.sh@42 -- # jq -r . 
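The es=161 -> es=33 -> es=1 hops just above (and es=234 -> 106 -> 1 in the previous test) are the harness folding a signal-style exit code before inverting it. A simplified sketch of that NOT-style helper, with the signal case list abbreviated:
NOT() {
    local es=0
    "$@" || es=$?
    if (( es > 128 )); then        # e.g. 234 -> 106 or 161 -> 33: strip the signal offset
        es=$(( es - 128 ))
        case "$es" in
            11) es=0 ;;            # a crash (SIGSEGV) is never an acceptable failure
            *)  es=1 ;;            # any other failure counts as an ordinary error
        esac
    fi
    (( !es == 0 ))                 # succeed only when the wrapped command failed
}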
00:06:47.412 Unsupported workload type: foobar 00:06:47.412 [2024-07-26 10:12:00.850140] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:47.412 accel_perf options: 00:06:47.412 [-h help message] 00:06:47.412 [-q queue depth per core] 00:06:47.412 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:47.412 [-T number of threads per core 00:06:47.412 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:47.412 [-t time in seconds] 00:06:47.412 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:47.412 [ dif_verify, , dif_generate, dif_generate_copy 00:06:47.412 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:47.412 [-l for compress/decompress workloads, name of uncompressed input file 00:06:47.412 [-S for crc32c workload, use this seed value (default 0) 00:06:47.412 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:47.412 [-f for fill workload, use this BYTE value (default 255) 00:06:47.412 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:47.412 [-y verify result if this switch is on] 00:06:47.412 [-a tasks to allocate per core (default: same value as -q)] 00:06:47.412 Can be used to spread operations across a wider range of memory. 00:06:47.412 10:12:00 -- common/autotest_common.sh@643 -- # es=1 00:06:47.412 10:12:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:47.412 10:12:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:47.412 10:12:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:47.412 00:06:47.412 real 0m0.033s 00:06:47.412 user 0m0.019s 00:06:47.412 sys 0m0.014s 00:06:47.412 10:12:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.412 ************************************ 00:06:47.412 END TEST accel_wrong_workload 00:06:47.412 ************************************ 00:06:47.412 10:12:00 -- common/autotest_common.sh@10 -- # set +x 00:06:47.670 10:12:00 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:47.670 10:12:00 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:47.670 10:12:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.670 10:12:00 -- common/autotest_common.sh@10 -- # set +x 00:06:47.670 ************************************ 00:06:47.670 START TEST accel_negative_buffers 00:06:47.670 ************************************ 00:06:47.670 10:12:00 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:47.670 10:12:00 -- common/autotest_common.sh@640 -- # local es=0 00:06:47.670 10:12:00 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:47.670 10:12:00 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:47.670 10:12:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:47.670 10:12:00 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:47.670 10:12:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:47.670 10:12:00 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:47.670 10:12:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:47.670 10:12:00 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:47.670 10:12:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.670 10:12:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.670 10:12:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.670 10:12:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.670 10:12:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.670 10:12:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.670 10:12:00 -- accel/accel.sh@42 -- # jq -r . 00:06:47.670 -x option must be non-negative. 00:06:47.670 [2024-07-26 10:12:00.921407] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:47.670 accel_perf options: 00:06:47.670 [-h help message] 00:06:47.670 [-q queue depth per core] 00:06:47.670 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:47.670 [-T number of threads per core 00:06:47.670 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:47.670 [-t time in seconds] 00:06:47.670 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:47.670 [ dif_verify, , dif_generate, dif_generate_copy 00:06:47.670 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:47.670 [-l for compress/decompress workloads, name of uncompressed input file 00:06:47.670 [-S for crc32c workload, use this seed value (default 0) 00:06:47.670 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:47.670 [-f for fill workload, use this BYTE value (default 255) 00:06:47.670 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:47.670 [-y verify result if this switch is on] 00:06:47.670 [-a tasks to allocate per core (default: same value as -q)] 00:06:47.670 Can be used to spread operations across a wider range of memory. 
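The usage text above is printed twice because both negative tests hand accel_perf an argument it rejects (first an unknown -w workload, then -x -1). For reference, valid invocations built from those same flags look like the ones exercised later in this run (binary path shortened, sizes illustrative):
# CRC-32C for 1 second with seed 32, verifying results
./build/examples/accel_perf -t 1 -w crc32c -S 32 -y
# fill with byte 128 (0x80) at queue depth 64 and allocate depth 64
./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
# xor across 3 source buffers with 8 KiB transfers
./build/examples/accel_perf -t 1 -w xor -x 3 -o 8192 -y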
00:06:47.670 10:12:00 -- common/autotest_common.sh@643 -- # es=1 00:06:47.670 10:12:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:47.670 10:12:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:47.670 10:12:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:47.670 00:06:47.670 real 0m0.027s 00:06:47.670 user 0m0.017s 00:06:47.670 sys 0m0.007s 00:06:47.670 10:12:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.670 ************************************ 00:06:47.670 END TEST accel_negative_buffers 00:06:47.670 ************************************ 00:06:47.671 10:12:00 -- common/autotest_common.sh@10 -- # set +x 00:06:47.671 10:12:00 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:47.671 10:12:00 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:47.671 10:12:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.671 10:12:00 -- common/autotest_common.sh@10 -- # set +x 00:06:47.671 ************************************ 00:06:47.671 START TEST accel_crc32c 00:06:47.671 ************************************ 00:06:47.671 10:12:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:47.671 10:12:00 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.671 10:12:00 -- accel/accel.sh@17 -- # local accel_module 00:06:47.671 10:12:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:47.671 10:12:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:47.671 10:12:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.671 10:12:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.671 10:12:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.671 10:12:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.671 10:12:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.671 10:12:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.671 10:12:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.671 10:12:00 -- accel/accel.sh@42 -- # jq -r . 00:06:47.671 [2024-07-26 10:12:00.996024] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:47.671 [2024-07-26 10:12:00.996122] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67746 ] 00:06:47.969 [2024-07-26 10:12:01.137551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.969 [2024-07-26 10:12:01.224989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.343 10:12:02 -- accel/accel.sh@18 -- # out=' 00:06:49.343 SPDK Configuration: 00:06:49.343 Core mask: 0x1 00:06:49.343 00:06:49.343 Accel Perf Configuration: 00:06:49.343 Workload Type: crc32c 00:06:49.343 CRC-32C seed: 32 00:06:49.343 Transfer size: 4096 bytes 00:06:49.343 Vector count 1 00:06:49.343 Module: software 00:06:49.343 Queue depth: 32 00:06:49.343 Allocate depth: 32 00:06:49.343 # threads/core: 1 00:06:49.343 Run time: 1 seconds 00:06:49.343 Verify: Yes 00:06:49.343 00:06:49.343 Running for 1 seconds... 
00:06:49.343 00:06:49.343 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:49.343 ------------------------------------------------------------------------------------ 00:06:49.343 0,0 441600/s 1725 MiB/s 0 0 00:06:49.343 ==================================================================================== 00:06:49.343 Total 441600/s 1725 MiB/s 0 0' 00:06:49.343 10:12:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:49.343 10:12:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.343 10:12:02 -- accel/accel.sh@20 -- # read -r var val 00:06:49.343 10:12:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:49.343 10:12:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.343 10:12:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.343 10:12:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.343 10:12:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.343 10:12:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.343 10:12:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.343 10:12:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.343 10:12:02 -- accel/accel.sh@42 -- # jq -r . 00:06:49.343 [2024-07-26 10:12:02.449904] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:49.343 [2024-07-26 10:12:02.450003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67766 ] 00:06:49.343 [2024-07-26 10:12:02.580303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.343 [2024-07-26 10:12:02.658581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.343 10:12:02 -- accel/accel.sh@21 -- # val= 00:06:49.343 10:12:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.343 10:12:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.343 10:12:02 -- accel/accel.sh@20 -- # read -r var val 00:06:49.343 10:12:02 -- accel/accel.sh@21 -- # val= 00:06:49.343 10:12:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.343 10:12:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.343 10:12:02 -- accel/accel.sh@20 -- # read -r var val 00:06:49.343 10:12:02 -- accel/accel.sh@21 -- # val=0x1 00:06:49.343 10:12:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.343 10:12:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.343 10:12:02 -- accel/accel.sh@20 -- # read -r var val 00:06:49.343 10:12:02 -- accel/accel.sh@21 -- # val= 00:06:49.343 10:12:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.343 10:12:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.343 10:12:02 -- accel/accel.sh@20 -- # read -r var val 00:06:49.343 10:12:02 -- accel/accel.sh@21 -- # val= 00:06:49.343 10:12:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.343 10:12:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.343 10:12:02 -- accel/accel.sh@20 -- # read -r var val 00:06:49.343 10:12:02 -- accel/accel.sh@21 -- # val=crc32c 00:06:49.343 10:12:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 10:12:02 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 10:12:02 -- accel/accel.sh@21 -- # val=32 00:06:49.344 10:12:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 10:12:02 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:49.344 10:12:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 10:12:02 -- accel/accel.sh@21 -- # val= 00:06:49.344 10:12:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 10:12:02 -- accel/accel.sh@21 -- # val=software 00:06:49.344 10:12:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 10:12:02 -- accel/accel.sh@23 -- # accel_module=software 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 10:12:02 -- accel/accel.sh@21 -- # val=32 00:06:49.344 10:12:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 10:12:02 -- accel/accel.sh@21 -- # val=32 00:06:49.344 10:12:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 10:12:02 -- accel/accel.sh@21 -- # val=1 00:06:49.344 10:12:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 10:12:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:49.344 10:12:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 10:12:02 -- accel/accel.sh@21 -- # val=Yes 00:06:49.344 10:12:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 10:12:02 -- accel/accel.sh@21 -- # val= 00:06:49.344 10:12:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 10:12:02 -- accel/accel.sh@21 -- # val= 00:06:49.344 10:12:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 10:12:02 -- accel/accel.sh@20 -- # read -r var val 00:06:50.728 10:12:03 -- accel/accel.sh@21 -- # val= 00:06:50.728 10:12:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.728 10:12:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.728 10:12:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.728 10:12:03 -- accel/accel.sh@21 -- # val= 00:06:50.728 10:12:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.728 10:12:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.728 10:12:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.728 10:12:03 -- accel/accel.sh@21 -- # val= 00:06:50.728 10:12:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.728 10:12:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.728 10:12:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.728 10:12:03 -- accel/accel.sh@21 -- # val= 00:06:50.728 10:12:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.728 10:12:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.728 10:12:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.728 10:12:03 -- accel/accel.sh@21 -- # val= 00:06:50.728 10:12:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.728 10:12:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.728 10:12:03 -- 
accel/accel.sh@20 -- # read -r var val 00:06:50.728 10:12:03 -- accel/accel.sh@21 -- # val= 00:06:50.728 10:12:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.728 10:12:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.728 10:12:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.728 10:12:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:50.728 10:12:03 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:50.728 10:12:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.728 00:06:50.728 real 0m2.890s 00:06:50.728 user 0m2.456s 00:06:50.728 sys 0m0.228s 00:06:50.728 10:12:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.728 ************************************ 00:06:50.728 END TEST accel_crc32c 00:06:50.728 ************************************ 00:06:50.728 10:12:03 -- common/autotest_common.sh@10 -- # set +x 00:06:50.728 10:12:03 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:50.728 10:12:03 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:50.728 10:12:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.728 10:12:03 -- common/autotest_common.sh@10 -- # set +x 00:06:50.728 ************************************ 00:06:50.728 START TEST accel_crc32c_C2 00:06:50.728 ************************************ 00:06:50.728 10:12:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:50.728 10:12:03 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.728 10:12:03 -- accel/accel.sh@17 -- # local accel_module 00:06:50.728 10:12:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:50.728 10:12:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:50.728 10:12:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.728 10:12:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.728 10:12:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.728 10:12:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.728 10:12:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.728 10:12:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.728 10:12:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.728 10:12:03 -- accel/accel.sh@42 -- # jq -r . 00:06:50.728 [2024-07-26 10:12:03.928775] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:50.728 [2024-07-26 10:12:03.928873] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67800 ] 00:06:50.728 [2024-07-26 10:12:04.066106] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.728 [2024-07-26 10:12:04.148335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.102 10:12:05 -- accel/accel.sh@18 -- # out=' 00:06:52.102 SPDK Configuration: 00:06:52.102 Core mask: 0x1 00:06:52.102 00:06:52.102 Accel Perf Configuration: 00:06:52.102 Workload Type: crc32c 00:06:52.102 CRC-32C seed: 0 00:06:52.102 Transfer size: 4096 bytes 00:06:52.102 Vector count 2 00:06:52.102 Module: software 00:06:52.102 Queue depth: 32 00:06:52.102 Allocate depth: 32 00:06:52.102 # threads/core: 1 00:06:52.102 Run time: 1 seconds 00:06:52.102 Verify: Yes 00:06:52.102 00:06:52.102 Running for 1 seconds... 
00:06:52.102 00:06:52.102 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:52.102 ------------------------------------------------------------------------------------ 00:06:52.102 0,0 345696/s 2700 MiB/s 0 0 00:06:52.102 ==================================================================================== 00:06:52.102 Total 345696/s 1350 MiB/s 0 0' 00:06:52.102 10:12:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.102 10:12:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:52.102 10:12:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.102 10:12:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.102 10:12:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:52.102 10:12:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.102 10:12:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.102 10:12:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.102 10:12:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.102 10:12:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.102 10:12:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.102 10:12:05 -- accel/accel.sh@42 -- # jq -r . 00:06:52.102 [2024-07-26 10:12:05.378988] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:52.102 [2024-07-26 10:12:05.379099] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67814 ] 00:06:52.102 [2024-07-26 10:12:05.516327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.360 [2024-07-26 10:12:05.608874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.360 10:12:05 -- accel/accel.sh@21 -- # val= 00:06:52.360 10:12:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.360 10:12:05 -- accel/accel.sh@21 -- # val= 00:06:52.360 10:12:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.360 10:12:05 -- accel/accel.sh@21 -- # val=0x1 00:06:52.360 10:12:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.360 10:12:05 -- accel/accel.sh@21 -- # val= 00:06:52.360 10:12:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.360 10:12:05 -- accel/accel.sh@21 -- # val= 00:06:52.360 10:12:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.360 10:12:05 -- accel/accel.sh@21 -- # val=crc32c 00:06:52.360 10:12:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.360 10:12:05 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.360 10:12:05 -- accel/accel.sh@21 -- # val=0 00:06:52.360 10:12:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.360 10:12:05 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:52.360 10:12:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.360 10:12:05 -- accel/accel.sh@21 -- # val= 00:06:52.360 10:12:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.360 10:12:05 -- accel/accel.sh@21 -- # val=software 00:06:52.360 10:12:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.360 10:12:05 -- accel/accel.sh@23 -- # accel_module=software 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.360 10:12:05 -- accel/accel.sh@21 -- # val=32 00:06:52.360 10:12:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.360 10:12:05 -- accel/accel.sh@21 -- # val=32 00:06:52.360 10:12:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.360 10:12:05 -- accel/accel.sh@21 -- # val=1 00:06:52.360 10:12:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.360 10:12:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:52.360 10:12:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.360 10:12:05 -- accel/accel.sh@21 -- # val=Yes 00:06:52.360 10:12:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.360 10:12:05 -- accel/accel.sh@21 -- # val= 00:06:52.360 10:12:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.360 10:12:05 -- accel/accel.sh@21 -- # val= 00:06:52.360 10:12:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.360 10:12:05 -- accel/accel.sh@20 -- # read -r var val 00:06:53.734 10:12:06 -- accel/accel.sh@21 -- # val= 00:06:53.734 10:12:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.734 10:12:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.734 10:12:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.734 10:12:06 -- accel/accel.sh@21 -- # val= 00:06:53.734 10:12:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.734 10:12:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.734 10:12:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.734 10:12:06 -- accel/accel.sh@21 -- # val= 00:06:53.734 10:12:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.734 10:12:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.734 10:12:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.734 10:12:06 -- accel/accel.sh@21 -- # val= 00:06:53.734 10:12:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.734 10:12:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.734 10:12:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.734 10:12:06 -- accel/accel.sh@21 -- # val= 00:06:53.734 10:12:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.734 10:12:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.734 10:12:06 -- 
accel/accel.sh@20 -- # read -r var val 00:06:53.734 10:12:06 -- accel/accel.sh@21 -- # val= 00:06:53.734 10:12:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.734 10:12:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.734 10:12:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.734 10:12:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:53.734 10:12:06 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:53.734 10:12:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.734 00:06:53.734 real 0m2.912s 00:06:53.734 user 0m2.490s 00:06:53.734 sys 0m0.223s 00:06:53.734 10:12:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.734 10:12:06 -- common/autotest_common.sh@10 -- # set +x 00:06:53.734 ************************************ 00:06:53.734 END TEST accel_crc32c_C2 00:06:53.734 ************************************ 00:06:53.734 10:12:06 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:53.734 10:12:06 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:53.734 10:12:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.734 10:12:06 -- common/autotest_common.sh@10 -- # set +x 00:06:53.734 ************************************ 00:06:53.734 START TEST accel_copy 00:06:53.734 ************************************ 00:06:53.734 10:12:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:53.734 10:12:06 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.734 10:12:06 -- accel/accel.sh@17 -- # local accel_module 00:06:53.734 10:12:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:53.734 10:12:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:53.734 10:12:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.734 10:12:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.734 10:12:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.734 10:12:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.734 10:12:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.734 10:12:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.734 10:12:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.734 10:12:06 -- accel/accel.sh@42 -- # jq -r . 00:06:53.734 [2024-07-26 10:12:06.891702] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:53.734 [2024-07-26 10:12:06.891795] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67849 ] 00:06:53.734 [2024-07-26 10:12:07.030135] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.734 [2024-07-26 10:12:07.112237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.145 10:12:08 -- accel/accel.sh@18 -- # out=' 00:06:55.145 SPDK Configuration: 00:06:55.145 Core mask: 0x1 00:06:55.145 00:06:55.145 Accel Perf Configuration: 00:06:55.145 Workload Type: copy 00:06:55.145 Transfer size: 4096 bytes 00:06:55.145 Vector count 1 00:06:55.145 Module: software 00:06:55.145 Queue depth: 32 00:06:55.145 Allocate depth: 32 00:06:55.145 # threads/core: 1 00:06:55.145 Run time: 1 seconds 00:06:55.145 Verify: Yes 00:06:55.145 00:06:55.145 Running for 1 seconds... 
00:06:55.145 00:06:55.145 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:55.145 ------------------------------------------------------------------------------------ 00:06:55.145 0,0 311680/s 1217 MiB/s 0 0 00:06:55.145 ==================================================================================== 00:06:55.145 Total 311680/s 1217 MiB/s 0 0' 00:06:55.145 10:12:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.145 10:12:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:55.145 10:12:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.145 10:12:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:55.145 10:12:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.145 10:12:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.145 10:12:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.145 10:12:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.145 10:12:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.145 10:12:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.145 10:12:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.145 10:12:08 -- accel/accel.sh@42 -- # jq -r . 00:06:55.145 [2024-07-26 10:12:08.324733] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:55.145 [2024-07-26 10:12:08.324827] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67868 ] 00:06:55.145 [2024-07-26 10:12:08.456077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.146 [2024-07-26 10:12:08.537106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.146 10:12:08 -- accel/accel.sh@21 -- # val= 00:06:55.146 10:12:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.146 10:12:08 -- accel/accel.sh@21 -- # val= 00:06:55.146 10:12:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.146 10:12:08 -- accel/accel.sh@21 -- # val=0x1 00:06:55.146 10:12:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.146 10:12:08 -- accel/accel.sh@21 -- # val= 00:06:55.146 10:12:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.146 10:12:08 -- accel/accel.sh@21 -- # val= 00:06:55.146 10:12:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.146 10:12:08 -- accel/accel.sh@21 -- # val=copy 00:06:55.146 10:12:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.146 10:12:08 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.146 10:12:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.146 10:12:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.146 10:12:08 -- 
accel/accel.sh@21 -- # val= 00:06:55.146 10:12:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.146 10:12:08 -- accel/accel.sh@21 -- # val=software 00:06:55.146 10:12:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.146 10:12:08 -- accel/accel.sh@23 -- # accel_module=software 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.146 10:12:08 -- accel/accel.sh@21 -- # val=32 00:06:55.146 10:12:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.146 10:12:08 -- accel/accel.sh@21 -- # val=32 00:06:55.146 10:12:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.146 10:12:08 -- accel/accel.sh@21 -- # val=1 00:06:55.146 10:12:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.146 10:12:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:55.146 10:12:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.146 10:12:08 -- accel/accel.sh@21 -- # val=Yes 00:06:55.146 10:12:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.146 10:12:08 -- accel/accel.sh@21 -- # val= 00:06:55.146 10:12:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.146 10:12:08 -- accel/accel.sh@21 -- # val= 00:06:55.146 10:12:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.146 10:12:08 -- accel/accel.sh@20 -- # read -r var val 00:06:56.520 10:12:09 -- accel/accel.sh@21 -- # val= 00:06:56.520 10:12:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.520 10:12:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.520 10:12:09 -- accel/accel.sh@20 -- # read -r var val 00:06:56.520 10:12:09 -- accel/accel.sh@21 -- # val= 00:06:56.520 10:12:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.520 10:12:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.520 10:12:09 -- accel/accel.sh@20 -- # read -r var val 00:06:56.520 10:12:09 -- accel/accel.sh@21 -- # val= 00:06:56.520 10:12:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.520 10:12:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.520 10:12:09 -- accel/accel.sh@20 -- # read -r var val 00:06:56.520 10:12:09 -- accel/accel.sh@21 -- # val= 00:06:56.520 10:12:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.520 10:12:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.520 10:12:09 -- accel/accel.sh@20 -- # read -r var val 00:06:56.520 10:12:09 -- accel/accel.sh@21 -- # val= 00:06:56.520 10:12:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.520 10:12:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.520 10:12:09 -- accel/accel.sh@20 -- # read -r var val 00:06:56.520 10:12:09 -- accel/accel.sh@21 -- # val= 00:06:56.520 10:12:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.520 10:12:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.520 10:12:09 -- 
accel/accel.sh@20 -- # read -r var val 00:06:56.520 10:12:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.520 10:12:09 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:56.520 10:12:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.520 00:06:56.520 real 0m2.870s 00:06:56.520 user 0m2.443s 00:06:56.520 sys 0m0.221s 00:06:56.520 10:12:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.520 10:12:09 -- common/autotest_common.sh@10 -- # set +x 00:06:56.520 ************************************ 00:06:56.521 END TEST accel_copy 00:06:56.521 ************************************ 00:06:56.521 10:12:09 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:56.521 10:12:09 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:56.521 10:12:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.521 10:12:09 -- common/autotest_common.sh@10 -- # set +x 00:06:56.521 ************************************ 00:06:56.521 START TEST accel_fill 00:06:56.521 ************************************ 00:06:56.521 10:12:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:56.521 10:12:09 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.521 10:12:09 -- accel/accel.sh@17 -- # local accel_module 00:06:56.521 10:12:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:56.521 10:12:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:56.521 10:12:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.521 10:12:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.521 10:12:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.521 10:12:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.521 10:12:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.521 10:12:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.521 10:12:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.521 10:12:09 -- accel/accel.sh@42 -- # jq -r . 00:06:56.521 [2024-07-26 10:12:09.798743] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:56.521 [2024-07-26 10:12:09.799431] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67903 ] 00:06:56.521 [2024-07-26 10:12:09.934034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.780 [2024-07-26 10:12:10.021109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.154 10:12:11 -- accel/accel.sh@18 -- # out=' 00:06:58.154 SPDK Configuration: 00:06:58.154 Core mask: 0x1 00:06:58.154 00:06:58.154 Accel Perf Configuration: 00:06:58.154 Workload Type: fill 00:06:58.154 Fill pattern: 0x80 00:06:58.154 Transfer size: 4096 bytes 00:06:58.154 Vector count 1 00:06:58.154 Module: software 00:06:58.154 Queue depth: 64 00:06:58.154 Allocate depth: 64 00:06:58.154 # threads/core: 1 00:06:58.154 Run time: 1 seconds 00:06:58.154 Verify: Yes 00:06:58.154 00:06:58.154 Running for 1 seconds... 
00:06:58.154 00:06:58.154 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.154 ------------------------------------------------------------------------------------ 00:06:58.154 0,0 473664/s 1850 MiB/s 0 0 00:06:58.154 ==================================================================================== 00:06:58.154 Total 473664/s 1850 MiB/s 0 0' 00:06:58.154 10:12:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.154 10:12:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:58.154 10:12:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.154 10:12:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.154 10:12:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.154 10:12:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.154 10:12:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.154 10:12:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.154 10:12:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.154 10:12:11 -- accel/accel.sh@42 -- # jq -r . 00:06:58.154 [2024-07-26 10:12:11.246281] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:58.154 [2024-07-26 10:12:11.246405] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67922 ] 00:06:58.154 [2024-07-26 10:12:11.383823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.154 [2024-07-26 10:12:11.447402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.154 10:12:11 -- accel/accel.sh@21 -- # val= 00:06:58.154 10:12:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.154 10:12:11 -- accel/accel.sh@21 -- # val= 00:06:58.154 10:12:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.154 10:12:11 -- accel/accel.sh@21 -- # val=0x1 00:06:58.154 10:12:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.154 10:12:11 -- accel/accel.sh@21 -- # val= 00:06:58.154 10:12:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.154 10:12:11 -- accel/accel.sh@21 -- # val= 00:06:58.154 10:12:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.154 10:12:11 -- accel/accel.sh@21 -- # val=fill 00:06:58.154 10:12:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.154 10:12:11 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.154 10:12:11 -- accel/accel.sh@21 -- # val=0x80 00:06:58.154 10:12:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # read -r var val 
00:06:58.154 10:12:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.154 10:12:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.154 10:12:11 -- accel/accel.sh@21 -- # val= 00:06:58.154 10:12:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.154 10:12:11 -- accel/accel.sh@21 -- # val=software 00:06:58.154 10:12:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.154 10:12:11 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.154 10:12:11 -- accel/accel.sh@21 -- # val=64 00:06:58.154 10:12:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.154 10:12:11 -- accel/accel.sh@21 -- # val=64 00:06:58.154 10:12:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.154 10:12:11 -- accel/accel.sh@21 -- # val=1 00:06:58.154 10:12:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.154 10:12:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.154 10:12:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.154 10:12:11 -- accel/accel.sh@21 -- # val=Yes 00:06:58.154 10:12:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.154 10:12:11 -- accel/accel.sh@21 -- # val= 00:06:58.154 10:12:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.154 10:12:11 -- accel/accel.sh@21 -- # val= 00:06:58.154 10:12:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.154 10:12:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.529 10:12:12 -- accel/accel.sh@21 -- # val= 00:06:59.529 10:12:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.529 10:12:12 -- accel/accel.sh@20 -- # IFS=: 00:06:59.529 10:12:12 -- accel/accel.sh@20 -- # read -r var val 00:06:59.529 10:12:12 -- accel/accel.sh@21 -- # val= 00:06:59.529 10:12:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.529 10:12:12 -- accel/accel.sh@20 -- # IFS=: 00:06:59.529 10:12:12 -- accel/accel.sh@20 -- # read -r var val 00:06:59.529 10:12:12 -- accel/accel.sh@21 -- # val= 00:06:59.529 10:12:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.529 10:12:12 -- accel/accel.sh@20 -- # IFS=: 00:06:59.529 10:12:12 -- accel/accel.sh@20 -- # read -r var val 00:06:59.529 10:12:12 -- accel/accel.sh@21 -- # val= 00:06:59.529 10:12:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.529 10:12:12 -- accel/accel.sh@20 -- # IFS=: 00:06:59.529 10:12:12 -- accel/accel.sh@20 -- # read -r var val 00:06:59.529 10:12:12 -- accel/accel.sh@21 -- # val= 00:06:59.529 10:12:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.529 10:12:12 -- accel/accel.sh@20 -- # IFS=: 
00:06:59.529 10:12:12 -- accel/accel.sh@20 -- # read -r var val 00:06:59.529 10:12:12 -- accel/accel.sh@21 -- # val= 00:06:59.529 10:12:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.529 10:12:12 -- accel/accel.sh@20 -- # IFS=: 00:06:59.529 10:12:12 -- accel/accel.sh@20 -- # read -r var val 00:06:59.529 10:12:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.529 10:12:12 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:59.529 10:12:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.529 00:06:59.529 real 0m2.865s 00:06:59.529 user 0m2.442s 00:06:59.529 sys 0m0.221s 00:06:59.529 10:12:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.529 10:12:12 -- common/autotest_common.sh@10 -- # set +x 00:06:59.529 ************************************ 00:06:59.529 END TEST accel_fill 00:06:59.529 ************************************ 00:06:59.529 10:12:12 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:59.529 10:12:12 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:59.529 10:12:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:59.529 10:12:12 -- common/autotest_common.sh@10 -- # set +x 00:06:59.529 ************************************ 00:06:59.529 START TEST accel_copy_crc32c 00:06:59.529 ************************************ 00:06:59.529 10:12:12 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:59.529 10:12:12 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.529 10:12:12 -- accel/accel.sh@17 -- # local accel_module 00:06:59.529 10:12:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:59.529 10:12:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:59.529 10:12:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.529 10:12:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.529 10:12:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.529 10:12:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.529 10:12:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.529 10:12:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.529 10:12:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.529 10:12:12 -- accel/accel.sh@42 -- # jq -r . 00:06:59.529 [2024-07-26 10:12:12.706290] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:59.529 [2024-07-26 10:12:12.706373] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67959 ] 00:06:59.529 [2024-07-26 10:12:12.836305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.529 [2024-07-26 10:12:12.919784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.931 10:12:14 -- accel/accel.sh@18 -- # out=' 00:07:00.931 SPDK Configuration: 00:07:00.931 Core mask: 0x1 00:07:00.931 00:07:00.931 Accel Perf Configuration: 00:07:00.931 Workload Type: copy_crc32c 00:07:00.931 CRC-32C seed: 0 00:07:00.931 Vector size: 4096 bytes 00:07:00.931 Transfer size: 4096 bytes 00:07:00.931 Vector count 1 00:07:00.931 Module: software 00:07:00.931 Queue depth: 32 00:07:00.931 Allocate depth: 32 00:07:00.931 # threads/core: 1 00:07:00.931 Run time: 1 seconds 00:07:00.931 Verify: Yes 00:07:00.931 00:07:00.931 Running for 1 seconds... 
00:07:00.931 00:07:00.931 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.931 ------------------------------------------------------------------------------------ 00:07:00.931 0,0 255456/s 997 MiB/s 0 0 00:07:00.931 ==================================================================================== 00:07:00.931 Total 255456/s 997 MiB/s 0 0' 00:07:00.931 10:12:14 -- accel/accel.sh@20 -- # IFS=: 00:07:00.931 10:12:14 -- accel/accel.sh@20 -- # read -r var val 00:07:00.931 10:12:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:00.931 10:12:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.931 10:12:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:00.931 10:12:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.931 10:12:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.931 10:12:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.931 10:12:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.931 10:12:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.931 10:12:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.931 10:12:14 -- accel/accel.sh@42 -- # jq -r . 00:07:00.931 [2024-07-26 10:12:14.140238] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:00.931 [2024-07-26 10:12:14.140381] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67973 ] 00:07:00.931 [2024-07-26 10:12:14.276081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.931 [2024-07-26 10:12:14.358552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.189 10:12:14 -- accel/accel.sh@21 -- # val= 00:07:01.189 10:12:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 10:12:14 -- accel/accel.sh@21 -- # val= 00:07:01.189 10:12:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 10:12:14 -- accel/accel.sh@21 -- # val=0x1 00:07:01.189 10:12:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 10:12:14 -- accel/accel.sh@21 -- # val= 00:07:01.189 10:12:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 10:12:14 -- accel/accel.sh@21 -- # val= 00:07:01.189 10:12:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 10:12:14 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:01.189 10:12:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 10:12:14 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 10:12:14 -- accel/accel.sh@21 -- # val=0 00:07:01.189 10:12:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 
10:12:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.189 10:12:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 10:12:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.189 10:12:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 10:12:14 -- accel/accel.sh@21 -- # val= 00:07:01.189 10:12:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 10:12:14 -- accel/accel.sh@21 -- # val=software 00:07:01.189 10:12:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 10:12:14 -- accel/accel.sh@23 -- # accel_module=software 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 10:12:14 -- accel/accel.sh@21 -- # val=32 00:07:01.189 10:12:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 10:12:14 -- accel/accel.sh@21 -- # val=32 00:07:01.189 10:12:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 10:12:14 -- accel/accel.sh@21 -- # val=1 00:07:01.189 10:12:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 10:12:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:01.189 10:12:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 10:12:14 -- accel/accel.sh@21 -- # val=Yes 00:07:01.189 10:12:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 10:12:14 -- accel/accel.sh@21 -- # val= 00:07:01.189 10:12:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 10:12:14 -- accel/accel.sh@21 -- # val= 00:07:01.189 10:12:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 10:12:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.123 10:12:15 -- accel/accel.sh@21 -- # val= 00:07:02.123 10:12:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.123 10:12:15 -- accel/accel.sh@20 -- # IFS=: 00:07:02.123 10:12:15 -- accel/accel.sh@20 -- # read -r var val 00:07:02.123 10:12:15 -- accel/accel.sh@21 -- # val= 00:07:02.123 10:12:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.123 10:12:15 -- accel/accel.sh@20 -- # IFS=: 00:07:02.123 10:12:15 -- accel/accel.sh@20 -- # read -r var val 00:07:02.123 10:12:15 -- accel/accel.sh@21 -- # val= 00:07:02.123 10:12:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.123 10:12:15 -- accel/accel.sh@20 -- # IFS=: 00:07:02.123 10:12:15 -- accel/accel.sh@20 -- # read -r var val 00:07:02.123 10:12:15 -- accel/accel.sh@21 -- # val= 00:07:02.123 10:12:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.123 10:12:15 -- accel/accel.sh@20 -- # IFS=: 
00:07:02.123 10:12:15 -- accel/accel.sh@20 -- # read -r var val 00:07:02.123 10:12:15 -- accel/accel.sh@21 -- # val= 00:07:02.123 10:12:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.123 10:12:15 -- accel/accel.sh@20 -- # IFS=: 00:07:02.123 10:12:15 -- accel/accel.sh@20 -- # read -r var val 00:07:02.123 10:12:15 -- accel/accel.sh@21 -- # val= 00:07:02.123 10:12:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.123 10:12:15 -- accel/accel.sh@20 -- # IFS=: 00:07:02.123 10:12:15 -- accel/accel.sh@20 -- # read -r var val 00:07:02.123 10:12:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:02.123 10:12:15 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:02.123 10:12:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.123 00:07:02.123 real 0m2.879s 00:07:02.123 user 0m2.447s 00:07:02.123 sys 0m0.230s 00:07:02.123 10:12:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.123 10:12:15 -- common/autotest_common.sh@10 -- # set +x 00:07:02.123 ************************************ 00:07:02.123 END TEST accel_copy_crc32c 00:07:02.123 ************************************ 00:07:02.382 10:12:15 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:02.382 10:12:15 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:02.382 10:12:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:02.382 10:12:15 -- common/autotest_common.sh@10 -- # set +x 00:07:02.382 ************************************ 00:07:02.382 START TEST accel_copy_crc32c_C2 00:07:02.382 ************************************ 00:07:02.382 10:12:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:02.382 10:12:15 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.382 10:12:15 -- accel/accel.sh@17 -- # local accel_module 00:07:02.382 10:12:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:02.382 10:12:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:02.382 10:12:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.383 10:12:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.383 10:12:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.383 10:12:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.383 10:12:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.383 10:12:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.383 10:12:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.383 10:12:15 -- accel/accel.sh@42 -- # jq -r . 00:07:02.383 [2024-07-26 10:12:15.632105] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
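Note: the copy_crc32c run above reported 255456 transfers/s at a 4096-byte transfer size, shown as 997 MiB/s. A quick sanity check of that relationship, assuming the bandwidth column is simply transfers/s multiplied by transfer size:

```python
# Back-of-the-envelope check of the bandwidth column in the table above,
# assuming MiB/s = transfers_per_sec * transfer_size_bytes / 2**20.

transfers_per_sec = 255_456   # copy_crc32c result from the table above
transfer_size = 4096          # "Transfer size: 4096 bytes"

mib_per_sec = transfers_per_sec * transfer_size / 2**20
print(f"{mib_per_sec:.1f} MiB/s")   # ~997.9, matching the reported 997 MiB/s
```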
00:07:02.383 [2024-07-26 10:12:15.632333] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68013 ] 00:07:02.383 [2024-07-26 10:12:15.764855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.641 [2024-07-26 10:12:15.841113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.016 10:12:17 -- accel/accel.sh@18 -- # out=' 00:07:04.016 SPDK Configuration: 00:07:04.016 Core mask: 0x1 00:07:04.016 00:07:04.016 Accel Perf Configuration: 00:07:04.016 Workload Type: copy_crc32c 00:07:04.016 CRC-32C seed: 0 00:07:04.016 Vector size: 4096 bytes 00:07:04.016 Transfer size: 8192 bytes 00:07:04.016 Vector count 2 00:07:04.016 Module: software 00:07:04.016 Queue depth: 32 00:07:04.016 Allocate depth: 32 00:07:04.016 # threads/core: 1 00:07:04.016 Run time: 1 seconds 00:07:04.016 Verify: Yes 00:07:04.016 00:07:04.016 Running for 1 seconds... 00:07:04.016 00:07:04.016 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.016 ------------------------------------------------------------------------------------ 00:07:04.016 0,0 182336/s 1424 MiB/s 0 0 00:07:04.016 ==================================================================================== 00:07:04.016 Total 182336/s 712 MiB/s 0 0' 00:07:04.016 10:12:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.016 10:12:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:04.016 10:12:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.016 10:12:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:04.016 10:12:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.016 10:12:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.016 10:12:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.016 10:12:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.016 10:12:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.016 10:12:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.016 10:12:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.016 10:12:17 -- accel/accel.sh@42 -- # jq -r . 00:07:04.016 [2024-07-26 10:12:17.059362] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
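Note: the -C 2 variant above uses two 4096-byte source vectors per 8192-byte transfer ("Vector size: 4096 bytes", "Transfer size: 8192 bytes", "Vector count 2"). Conceptually, copy_crc32c copies the source data to a destination while accumulating a CRC-32C over it, and with multiple vectors the checksum is chained across them. A software-only sketch, not SPDK's implementation; the bit-by-bit CRC below uses the Castagnoli polynomial with seed 0, matching "CRC-32C seed: 0" in the configuration:

```python
# Illustrative software sketch of the copy_crc32c operation -- not SPDK code.
# CRC-32C (Castagnoli), reflected form, computed bit by bit; seed 0 as above.

def crc32c(data: bytes, crc: int = 0) -> int:
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

def copy_crc32c(dst: bytearray, srcs: list[bytes], seed: int = 0) -> int:
    """Copy each source vector into dst back-to-back and return the CRC-32C
    accumulated over all of them (chained across vectors)."""
    crc = seed
    offset = 0
    for src in srcs:
        dst[offset:offset + len(src)] = src
        crc = crc32c(src, crc)
        offset += len(src)
    return crc

# Two 4096-byte vectors -> one 8192-byte transfer, as in the -C 2 run above.
v0 = bytes(range(256)) * 16
v1 = bytes(reversed(range(256))) * 16
dst = bytearray(len(v0) + len(v1))
crc = copy_crc32c(dst, [v0, v1])
assert crc == crc32c(v0 + v1)        # chaining across vectors == one pass
assert bytes(dst) == v0 + v1         # the copy half of copy_crc32c
print("copy ok, crc =", hex(crc))
```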
00:07:04.016 [2024-07-26 10:12:17.059450] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68027 ] 00:07:04.016 [2024-07-26 10:12:17.196976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.016 [2024-07-26 10:12:17.281966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.016 10:12:17 -- accel/accel.sh@21 -- # val= 00:07:04.016 10:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.016 10:12:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.016 10:12:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.016 10:12:17 -- accel/accel.sh@21 -- # val= 00:07:04.016 10:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.016 10:12:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.016 10:12:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.016 10:12:17 -- accel/accel.sh@21 -- # val=0x1 00:07:04.016 10:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.016 10:12:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.016 10:12:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.016 10:12:17 -- accel/accel.sh@21 -- # val= 00:07:04.016 10:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.016 10:12:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.016 10:12:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.016 10:12:17 -- accel/accel.sh@21 -- # val= 00:07:04.016 10:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.016 10:12:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.016 10:12:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.016 10:12:17 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:04.016 10:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.016 10:12:17 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:04.016 10:12:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.016 10:12:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.016 10:12:17 -- accel/accel.sh@21 -- # val=0 00:07:04.016 10:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.016 10:12:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.016 10:12:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.016 10:12:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.016 10:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.016 10:12:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.016 10:12:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.016 10:12:17 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:04.016 10:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.016 10:12:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.016 10:12:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.016 10:12:17 -- accel/accel.sh@21 -- # val= 00:07:04.016 10:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.016 10:12:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.016 10:12:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.016 10:12:17 -- accel/accel.sh@21 -- # val=software 00:07:04.016 10:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.016 10:12:17 -- accel/accel.sh@23 -- # accel_module=software 00:07:04.017 10:12:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.017 10:12:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.017 10:12:17 -- accel/accel.sh@21 -- # val=32 00:07:04.017 10:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.017 10:12:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.017 10:12:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.017 10:12:17 -- accel/accel.sh@21 -- # val=32 
00:07:04.017 10:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.017 10:12:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.017 10:12:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.017 10:12:17 -- accel/accel.sh@21 -- # val=1 00:07:04.017 10:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.017 10:12:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.017 10:12:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.017 10:12:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:04.017 10:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.017 10:12:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.017 10:12:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.017 10:12:17 -- accel/accel.sh@21 -- # val=Yes 00:07:04.017 10:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.017 10:12:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.017 10:12:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.017 10:12:17 -- accel/accel.sh@21 -- # val= 00:07:04.017 10:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.017 10:12:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.017 10:12:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.017 10:12:17 -- accel/accel.sh@21 -- # val= 00:07:04.017 10:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.017 10:12:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.017 10:12:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.396 10:12:18 -- accel/accel.sh@21 -- # val= 00:07:05.396 10:12:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.396 10:12:18 -- accel/accel.sh@20 -- # IFS=: 00:07:05.396 10:12:18 -- accel/accel.sh@20 -- # read -r var val 00:07:05.396 10:12:18 -- accel/accel.sh@21 -- # val= 00:07:05.396 10:12:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.396 10:12:18 -- accel/accel.sh@20 -- # IFS=: 00:07:05.396 10:12:18 -- accel/accel.sh@20 -- # read -r var val 00:07:05.396 10:12:18 -- accel/accel.sh@21 -- # val= 00:07:05.396 10:12:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.396 10:12:18 -- accel/accel.sh@20 -- # IFS=: 00:07:05.396 10:12:18 -- accel/accel.sh@20 -- # read -r var val 00:07:05.396 10:12:18 -- accel/accel.sh@21 -- # val= 00:07:05.396 10:12:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.396 10:12:18 -- accel/accel.sh@20 -- # IFS=: 00:07:05.396 10:12:18 -- accel/accel.sh@20 -- # read -r var val 00:07:05.396 10:12:18 -- accel/accel.sh@21 -- # val= 00:07:05.396 10:12:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.396 10:12:18 -- accel/accel.sh@20 -- # IFS=: 00:07:05.396 10:12:18 -- accel/accel.sh@20 -- # read -r var val 00:07:05.396 10:12:18 -- accel/accel.sh@21 -- # val= 00:07:05.396 10:12:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.396 10:12:18 -- accel/accel.sh@20 -- # IFS=: 00:07:05.396 10:12:18 -- accel/accel.sh@20 -- # read -r var val 00:07:05.396 10:12:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:05.396 10:12:18 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:05.396 10:12:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.396 00:07:05.396 real 0m2.880s 00:07:05.396 user 0m2.452s 00:07:05.396 sys 0m0.228s 00:07:05.396 ************************************ 00:07:05.396 END TEST accel_copy_crc32c_C2 00:07:05.396 ************************************ 00:07:05.396 10:12:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.396 10:12:18 -- common/autotest_common.sh@10 -- # set +x 00:07:05.396 10:12:18 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:05.396 10:12:18 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:07:05.396 10:12:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:05.396 10:12:18 -- common/autotest_common.sh@10 -- # set +x 00:07:05.396 ************************************ 00:07:05.396 START TEST accel_dualcast 00:07:05.396 ************************************ 00:07:05.396 10:12:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:07:05.396 10:12:18 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.396 10:12:18 -- accel/accel.sh@17 -- # local accel_module 00:07:05.396 10:12:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:05.396 10:12:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:05.396 10:12:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.396 10:12:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.396 10:12:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.396 10:12:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.396 10:12:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.396 10:12:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.396 10:12:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.396 10:12:18 -- accel/accel.sh@42 -- # jq -r . 00:07:05.396 [2024-07-26 10:12:18.561666] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:05.396 [2024-07-26 10:12:18.561754] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68062 ] 00:07:05.396 [2024-07-26 10:12:18.691326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.396 [2024-07-26 10:12:18.778394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.775 10:12:19 -- accel/accel.sh@18 -- # out=' 00:07:06.775 SPDK Configuration: 00:07:06.775 Core mask: 0x1 00:07:06.775 00:07:06.775 Accel Perf Configuration: 00:07:06.775 Workload Type: dualcast 00:07:06.775 Transfer size: 4096 bytes 00:07:06.775 Vector count 1 00:07:06.775 Module: software 00:07:06.775 Queue depth: 32 00:07:06.775 Allocate depth: 32 00:07:06.775 # threads/core: 1 00:07:06.775 Run time: 1 seconds 00:07:06.775 Verify: Yes 00:07:06.775 00:07:06.775 Running for 1 seconds... 00:07:06.775 00:07:06.775 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.775 ------------------------------------------------------------------------------------ 00:07:06.775 0,0 347904/s 1359 MiB/s 0 0 00:07:06.775 ==================================================================================== 00:07:06.775 Total 347904/s 1359 MiB/s 0 0' 00:07:06.775 10:12:19 -- accel/accel.sh@20 -- # IFS=: 00:07:06.775 10:12:19 -- accel/accel.sh@20 -- # read -r var val 00:07:06.775 10:12:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:06.775 10:12:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:06.775 10:12:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.775 10:12:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.775 10:12:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.775 10:12:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.775 10:12:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.775 10:12:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.775 10:12:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.775 10:12:19 -- accel/accel.sh@42 -- # jq -r . 
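Note: the dualcast results above (347904 transfers/s at 4096 bytes, i.e. the reported 1359 MiB/s) come from an operation that, as the name suggests, writes one source buffer to two destinations in a single op. A minimal software illustration, not SPDK's implementation:

```python
# Illustrative sketch of a "dualcast" operation -- not SPDK code.
# One 4096-byte source is copied to two destinations, then both are verified.

def dualcast(dst1: bytearray, dst2: bytearray, src: bytes) -> None:
    dst1[:len(src)] = src
    dst2[:len(src)] = src

src = bytes(range(256)) * 16                      # 4096 bytes, as configured above
dst1, dst2 = bytearray(len(src)), bytearray(len(src))
dualcast(dst1, dst2, src)
assert bytes(dst1) == src and bytes(dst2) == src  # the Verify: Yes step
print("dualcast ok:", len(src), "bytes to 2 destinations")
```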
00:07:06.775 [2024-07-26 10:12:20.008315] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:06.775 [2024-07-26 10:12:20.008463] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68083 ] 00:07:06.775 [2024-07-26 10:12:20.154260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.034 [2024-07-26 10:12:20.243954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.034 10:12:20 -- accel/accel.sh@21 -- # val= 00:07:07.034 10:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.034 10:12:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.034 10:12:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.034 10:12:20 -- accel/accel.sh@21 -- # val= 00:07:07.034 10:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.034 10:12:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.034 10:12:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.034 10:12:20 -- accel/accel.sh@21 -- # val=0x1 00:07:07.034 10:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.034 10:12:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.034 10:12:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.034 10:12:20 -- accel/accel.sh@21 -- # val= 00:07:07.034 10:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.034 10:12:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.034 10:12:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.034 10:12:20 -- accel/accel.sh@21 -- # val= 00:07:07.034 10:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.034 10:12:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.034 10:12:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.034 10:12:20 -- accel/accel.sh@21 -- # val=dualcast 00:07:07.034 10:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.034 10:12:20 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:07.034 10:12:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.034 10:12:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.034 10:12:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.034 10:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.034 10:12:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.034 10:12:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.034 10:12:20 -- accel/accel.sh@21 -- # val= 00:07:07.034 10:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.034 10:12:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.034 10:12:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.034 10:12:20 -- accel/accel.sh@21 -- # val=software 00:07:07.034 10:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.034 10:12:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.034 10:12:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.034 10:12:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.034 10:12:20 -- accel/accel.sh@21 -- # val=32 00:07:07.034 10:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.034 10:12:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.034 10:12:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.034 10:12:20 -- accel/accel.sh@21 -- # val=32 00:07:07.034 10:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.034 10:12:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.035 10:12:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.035 10:12:20 -- accel/accel.sh@21 -- # val=1 00:07:07.035 10:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.035 10:12:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.035 
10:12:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.035 10:12:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.035 10:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.035 10:12:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.035 10:12:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.035 10:12:20 -- accel/accel.sh@21 -- # val=Yes 00:07:07.035 10:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.035 10:12:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.035 10:12:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.035 10:12:20 -- accel/accel.sh@21 -- # val= 00:07:07.035 10:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.035 10:12:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.035 10:12:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.035 10:12:20 -- accel/accel.sh@21 -- # val= 00:07:07.035 10:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.035 10:12:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.035 10:12:20 -- accel/accel.sh@20 -- # read -r var val 00:07:08.414 10:12:21 -- accel/accel.sh@21 -- # val= 00:07:08.414 10:12:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.414 10:12:21 -- accel/accel.sh@20 -- # IFS=: 00:07:08.414 10:12:21 -- accel/accel.sh@20 -- # read -r var val 00:07:08.414 10:12:21 -- accel/accel.sh@21 -- # val= 00:07:08.414 10:12:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.414 10:12:21 -- accel/accel.sh@20 -- # IFS=: 00:07:08.414 10:12:21 -- accel/accel.sh@20 -- # read -r var val 00:07:08.414 10:12:21 -- accel/accel.sh@21 -- # val= 00:07:08.414 10:12:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.414 10:12:21 -- accel/accel.sh@20 -- # IFS=: 00:07:08.414 10:12:21 -- accel/accel.sh@20 -- # read -r var val 00:07:08.414 10:12:21 -- accel/accel.sh@21 -- # val= 00:07:08.414 10:12:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.414 10:12:21 -- accel/accel.sh@20 -- # IFS=: 00:07:08.414 10:12:21 -- accel/accel.sh@20 -- # read -r var val 00:07:08.414 10:12:21 -- accel/accel.sh@21 -- # val= 00:07:08.414 10:12:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.414 10:12:21 -- accel/accel.sh@20 -- # IFS=: 00:07:08.414 10:12:21 -- accel/accel.sh@20 -- # read -r var val 00:07:08.414 10:12:21 -- accel/accel.sh@21 -- # val= 00:07:08.414 10:12:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.414 10:12:21 -- accel/accel.sh@20 -- # IFS=: 00:07:08.414 10:12:21 -- accel/accel.sh@20 -- # read -r var val 00:07:08.414 10:12:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:08.414 10:12:21 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:08.414 10:12:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.414 00:07:08.414 real 0m2.910s 00:07:08.414 user 0m2.484s 00:07:08.414 sys 0m0.218s 00:07:08.414 10:12:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.414 10:12:21 -- common/autotest_common.sh@10 -- # set +x 00:07:08.414 ************************************ 00:07:08.414 END TEST accel_dualcast 00:07:08.414 ************************************ 00:07:08.414 10:12:21 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:08.414 10:12:21 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:08.414 10:12:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:08.414 10:12:21 -- common/autotest_common.sh@10 -- # set +x 00:07:08.414 ************************************ 00:07:08.414 START TEST accel_compare 00:07:08.414 ************************************ 00:07:08.414 10:12:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:07:08.414 
10:12:21 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.414 10:12:21 -- accel/accel.sh@17 -- # local accel_module 00:07:08.414 10:12:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:08.414 10:12:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:08.414 10:12:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.414 10:12:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.414 10:12:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.414 10:12:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.414 10:12:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.414 10:12:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.414 10:12:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.414 10:12:21 -- accel/accel.sh@42 -- # jq -r . 00:07:08.414 [2024-07-26 10:12:21.525288] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:08.414 [2024-07-26 10:12:21.525380] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68112 ] 00:07:08.414 [2024-07-26 10:12:21.662864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.414 [2024-07-26 10:12:21.737378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.792 10:12:22 -- accel/accel.sh@18 -- # out=' 00:07:09.792 SPDK Configuration: 00:07:09.792 Core mask: 0x1 00:07:09.792 00:07:09.792 Accel Perf Configuration: 00:07:09.792 Workload Type: compare 00:07:09.792 Transfer size: 4096 bytes 00:07:09.792 Vector count 1 00:07:09.792 Module: software 00:07:09.792 Queue depth: 32 00:07:09.792 Allocate depth: 32 00:07:09.792 # threads/core: 1 00:07:09.792 Run time: 1 seconds 00:07:09.792 Verify: Yes 00:07:09.793 00:07:09.793 Running for 1 seconds... 00:07:09.793 00:07:09.793 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:09.793 ------------------------------------------------------------------------------------ 00:07:09.793 0,0 466784/s 1823 MiB/s 0 0 00:07:09.793 ==================================================================================== 00:07:09.793 Total 466784/s 1823 MiB/s 0 0' 00:07:09.793 10:12:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:09.793 10:12:22 -- accel/accel.sh@20 -- # IFS=: 00:07:09.793 10:12:22 -- accel/accel.sh@20 -- # read -r var val 00:07:09.793 10:12:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:09.793 10:12:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.793 10:12:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.793 10:12:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.793 10:12:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.793 10:12:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.793 10:12:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.793 10:12:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.793 10:12:22 -- accel/accel.sh@42 -- # jq -r . 00:07:09.793 [2024-07-26 10:12:22.962266] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
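Note: the compare workload (466784 transfers/s, roughly 1823 MiB/s above) checks two buffers for equality; the "Miscompares" column in these tables presumably counts operations whose comparison or verification did not match. A memcmp-style sketch of the software path, not SPDK code:

```python
# Illustrative sketch of the "compare" operation -- not SPDK code.
# Two 4096-byte buffers are compared; a mismatch would count as a miscompare.

def compare(a: bytes, b: bytes) -> bool:
    return a == b            # byte-for-byte equality, like memcmp() == 0

buf_a = bytes(range(256)) * 16
buf_b = bytes(buf_a)         # identical copy -> expect a match

miscompares = 0 if compare(buf_a, buf_b) else 1
print("miscompares:", miscompares)   # 0, as in the table above
```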
00:07:09.793 [2024-07-26 10:12:22.962373] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68137 ] 00:07:09.793 [2024-07-26 10:12:23.093996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.793 [2024-07-26 10:12:23.186215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.793 10:12:23 -- accel/accel.sh@21 -- # val= 00:07:09.793 10:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.793 10:12:23 -- accel/accel.sh@20 -- # IFS=: 00:07:09.793 10:12:23 -- accel/accel.sh@20 -- # read -r var val 00:07:09.793 10:12:23 -- accel/accel.sh@21 -- # val= 00:07:09.793 10:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.793 10:12:23 -- accel/accel.sh@20 -- # IFS=: 00:07:09.793 10:12:23 -- accel/accel.sh@20 -- # read -r var val 00:07:09.793 10:12:23 -- accel/accel.sh@21 -- # val=0x1 00:07:09.793 10:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.793 10:12:23 -- accel/accel.sh@20 -- # IFS=: 00:07:09.793 10:12:23 -- accel/accel.sh@20 -- # read -r var val 00:07:09.793 10:12:23 -- accel/accel.sh@21 -- # val= 00:07:09.793 10:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.793 10:12:23 -- accel/accel.sh@20 -- # IFS=: 00:07:09.793 10:12:23 -- accel/accel.sh@20 -- # read -r var val 00:07:09.793 10:12:23 -- accel/accel.sh@21 -- # val= 00:07:10.051 10:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.051 10:12:23 -- accel/accel.sh@20 -- # IFS=: 00:07:10.051 10:12:23 -- accel/accel.sh@20 -- # read -r var val 00:07:10.051 10:12:23 -- accel/accel.sh@21 -- # val=compare 00:07:10.051 10:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.051 10:12:23 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:10.051 10:12:23 -- accel/accel.sh@20 -- # IFS=: 00:07:10.051 10:12:23 -- accel/accel.sh@20 -- # read -r var val 00:07:10.051 10:12:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.051 10:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.051 10:12:23 -- accel/accel.sh@20 -- # IFS=: 00:07:10.051 10:12:23 -- accel/accel.sh@20 -- # read -r var val 00:07:10.051 10:12:23 -- accel/accel.sh@21 -- # val= 00:07:10.051 10:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.052 10:12:23 -- accel/accel.sh@20 -- # IFS=: 00:07:10.052 10:12:23 -- accel/accel.sh@20 -- # read -r var val 00:07:10.052 10:12:23 -- accel/accel.sh@21 -- # val=software 00:07:10.052 10:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.052 10:12:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.052 10:12:23 -- accel/accel.sh@20 -- # IFS=: 00:07:10.052 10:12:23 -- accel/accel.sh@20 -- # read -r var val 00:07:10.052 10:12:23 -- accel/accel.sh@21 -- # val=32 00:07:10.052 10:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.052 10:12:23 -- accel/accel.sh@20 -- # IFS=: 00:07:10.052 10:12:23 -- accel/accel.sh@20 -- # read -r var val 00:07:10.052 10:12:23 -- accel/accel.sh@21 -- # val=32 00:07:10.052 10:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.052 10:12:23 -- accel/accel.sh@20 -- # IFS=: 00:07:10.052 10:12:23 -- accel/accel.sh@20 -- # read -r var val 00:07:10.052 10:12:23 -- accel/accel.sh@21 -- # val=1 00:07:10.052 10:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.052 10:12:23 -- accel/accel.sh@20 -- # IFS=: 00:07:10.052 10:12:23 -- accel/accel.sh@20 -- # read -r var val 00:07:10.052 10:12:23 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:10.052 10:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.052 10:12:23 -- accel/accel.sh@20 -- # IFS=: 00:07:10.052 10:12:23 -- accel/accel.sh@20 -- # read -r var val 00:07:10.052 10:12:23 -- accel/accel.sh@21 -- # val=Yes 00:07:10.052 10:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.052 10:12:23 -- accel/accel.sh@20 -- # IFS=: 00:07:10.052 10:12:23 -- accel/accel.sh@20 -- # read -r var val 00:07:10.052 10:12:23 -- accel/accel.sh@21 -- # val= 00:07:10.052 10:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.052 10:12:23 -- accel/accel.sh@20 -- # IFS=: 00:07:10.052 10:12:23 -- accel/accel.sh@20 -- # read -r var val 00:07:10.052 10:12:23 -- accel/accel.sh@21 -- # val= 00:07:10.052 10:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.052 10:12:23 -- accel/accel.sh@20 -- # IFS=: 00:07:10.052 10:12:23 -- accel/accel.sh@20 -- # read -r var val 00:07:10.986 10:12:24 -- accel/accel.sh@21 -- # val= 00:07:10.986 10:12:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.986 10:12:24 -- accel/accel.sh@20 -- # IFS=: 00:07:10.986 10:12:24 -- accel/accel.sh@20 -- # read -r var val 00:07:10.986 10:12:24 -- accel/accel.sh@21 -- # val= 00:07:10.986 10:12:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.986 10:12:24 -- accel/accel.sh@20 -- # IFS=: 00:07:10.986 10:12:24 -- accel/accel.sh@20 -- # read -r var val 00:07:10.986 10:12:24 -- accel/accel.sh@21 -- # val= 00:07:10.986 10:12:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.986 10:12:24 -- accel/accel.sh@20 -- # IFS=: 00:07:10.986 10:12:24 -- accel/accel.sh@20 -- # read -r var val 00:07:10.986 10:12:24 -- accel/accel.sh@21 -- # val= 00:07:10.986 10:12:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.986 10:12:24 -- accel/accel.sh@20 -- # IFS=: 00:07:10.986 10:12:24 -- accel/accel.sh@20 -- # read -r var val 00:07:10.986 10:12:24 -- accel/accel.sh@21 -- # val= 00:07:10.986 10:12:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.986 10:12:24 -- accel/accel.sh@20 -- # IFS=: 00:07:10.986 10:12:24 -- accel/accel.sh@20 -- # read -r var val 00:07:10.986 10:12:24 -- accel/accel.sh@21 -- # val= 00:07:10.986 10:12:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.986 10:12:24 -- accel/accel.sh@20 -- # IFS=: 00:07:10.986 10:12:24 -- accel/accel.sh@20 -- # read -r var val 00:07:10.986 10:12:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:10.986 10:12:24 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:10.986 ************************************ 00:07:10.986 END TEST accel_compare 00:07:10.986 ************************************ 00:07:10.987 10:12:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.987 00:07:10.987 real 0m2.903s 00:07:10.987 user 0m2.471s 00:07:10.987 sys 0m0.225s 00:07:10.987 10:12:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.987 10:12:24 -- common/autotest_common.sh@10 -- # set +x 00:07:11.246 10:12:24 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:11.246 10:12:24 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:11.246 10:12:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.246 10:12:24 -- common/autotest_common.sh@10 -- # set +x 00:07:11.246 ************************************ 00:07:11.246 START TEST accel_xor 00:07:11.246 ************************************ 00:07:11.246 10:12:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:11.246 10:12:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.246 10:12:24 -- accel/accel.sh@17 -- # local accel_module 00:07:11.246 
10:12:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:11.246 10:12:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:11.246 10:12:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.246 10:12:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.246 10:12:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.246 10:12:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.246 10:12:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.246 10:12:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.246 10:12:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.246 10:12:24 -- accel/accel.sh@42 -- # jq -r . 00:07:11.246 [2024-07-26 10:12:24.483979] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:11.246 [2024-07-26 10:12:24.484121] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68166 ] 00:07:11.246 [2024-07-26 10:12:24.633626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.505 [2024-07-26 10:12:24.720178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.884 10:12:25 -- accel/accel.sh@18 -- # out=' 00:07:12.884 SPDK Configuration: 00:07:12.884 Core mask: 0x1 00:07:12.884 00:07:12.884 Accel Perf Configuration: 00:07:12.884 Workload Type: xor 00:07:12.884 Source buffers: 2 00:07:12.884 Transfer size: 4096 bytes 00:07:12.884 Vector count 1 00:07:12.884 Module: software 00:07:12.884 Queue depth: 32 00:07:12.884 Allocate depth: 32 00:07:12.884 # threads/core: 1 00:07:12.884 Run time: 1 seconds 00:07:12.884 Verify: Yes 00:07:12.884 00:07:12.884 Running for 1 seconds... 00:07:12.884 00:07:12.884 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.884 ------------------------------------------------------------------------------------ 00:07:12.884 0,0 260608/s 1018 MiB/s 0 0 00:07:12.884 ==================================================================================== 00:07:12.884 Total 260608/s 1018 MiB/s 0 0' 00:07:12.884 10:12:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:12.884 10:12:25 -- accel/accel.sh@20 -- # IFS=: 00:07:12.884 10:12:25 -- accel/accel.sh@20 -- # read -r var val 00:07:12.884 10:12:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:12.884 10:12:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.884 10:12:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.884 10:12:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.884 10:12:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.884 10:12:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.884 10:12:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.885 10:12:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.885 10:12:25 -- accel/accel.sh@42 -- # jq -r . 00:07:12.885 [2024-07-26 10:12:25.938288] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
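Note: the xor case above ("Source buffers: 2", 260608 transfers/s, about 1018 MiB/s) XORs its source buffers together into a destination; the -x 3 run that follows does the same with three sources. A generic software sketch for N equal-sized source buffers, not SPDK's implementation:

```python
# Illustrative sketch of the "xor" operation over N equal-sized source buffers
# -- not SPDK code. Covers both the 2-source run above and the -x 3 run below.

def xor_buffers(srcs: list[bytes]) -> bytes:
    dst = bytearray(len(srcs[0]))
    for src in srcs:
        for i, byte in enumerate(src):
            dst[i] ^= byte
    return bytes(dst)

a = bytes(range(256)) * 16          # three 4096-byte sources
b = bytes([0x55]) * 4096
c = bytes([0xAA]) * 4096

out2 = xor_buffers([a, b])              # 2-source case
out3 = xor_buffers([a, b, c])           # 3-source case (-x 3)
assert out3 == xor_buffers([out2, c])   # xor composes across sources
print(len(out2), len(out3))
```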
00:07:12.885 [2024-07-26 10:12:25.938363] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68191 ] 00:07:12.885 [2024-07-26 10:12:26.069249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.885 [2024-07-26 10:12:26.144070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.885 10:12:26 -- accel/accel.sh@21 -- # val= 00:07:12.885 10:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.885 10:12:26 -- accel/accel.sh@21 -- # val= 00:07:12.885 10:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.885 10:12:26 -- accel/accel.sh@21 -- # val=0x1 00:07:12.885 10:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.885 10:12:26 -- accel/accel.sh@21 -- # val= 00:07:12.885 10:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.885 10:12:26 -- accel/accel.sh@21 -- # val= 00:07:12.885 10:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.885 10:12:26 -- accel/accel.sh@21 -- # val=xor 00:07:12.885 10:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.885 10:12:26 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.885 10:12:26 -- accel/accel.sh@21 -- # val=2 00:07:12.885 10:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.885 10:12:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.885 10:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.885 10:12:26 -- accel/accel.sh@21 -- # val= 00:07:12.885 10:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.885 10:12:26 -- accel/accel.sh@21 -- # val=software 00:07:12.885 10:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.885 10:12:26 -- accel/accel.sh@23 -- # accel_module=software 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.885 10:12:26 -- accel/accel.sh@21 -- # val=32 00:07:12.885 10:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.885 10:12:26 -- accel/accel.sh@21 -- # val=32 00:07:12.885 10:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.885 10:12:26 -- accel/accel.sh@21 -- # val=1 00:07:12.885 10:12:26 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.885 10:12:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:12.885 10:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.885 10:12:26 -- accel/accel.sh@21 -- # val=Yes 00:07:12.885 10:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.885 10:12:26 -- accel/accel.sh@21 -- # val= 00:07:12.885 10:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.885 10:12:26 -- accel/accel.sh@21 -- # val= 00:07:12.885 10:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.885 10:12:26 -- accel/accel.sh@20 -- # read -r var val 00:07:14.262 10:12:27 -- accel/accel.sh@21 -- # val= 00:07:14.262 10:12:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.262 10:12:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.262 10:12:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.262 10:12:27 -- accel/accel.sh@21 -- # val= 00:07:14.262 10:12:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.262 10:12:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.262 10:12:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.262 10:12:27 -- accel/accel.sh@21 -- # val= 00:07:14.262 10:12:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.262 10:12:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.262 10:12:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.262 10:12:27 -- accel/accel.sh@21 -- # val= 00:07:14.262 10:12:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.262 10:12:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.262 10:12:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.262 10:12:27 -- accel/accel.sh@21 -- # val= 00:07:14.262 10:12:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.262 10:12:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.262 10:12:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.262 10:12:27 -- accel/accel.sh@21 -- # val= 00:07:14.262 10:12:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.262 10:12:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.262 10:12:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.262 10:12:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.262 10:12:27 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:14.262 10:12:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.262 ************************************ 00:07:14.262 END TEST accel_xor 00:07:14.262 ************************************ 00:07:14.262 00:07:14.262 real 0m2.899s 00:07:14.262 user 0m2.449s 00:07:14.262 sys 0m0.241s 00:07:14.262 10:12:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.262 10:12:27 -- common/autotest_common.sh@10 -- # set +x 00:07:14.262 10:12:27 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:14.262 10:12:27 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:14.262 10:12:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.262 10:12:27 -- common/autotest_common.sh@10 -- # set +x 00:07:14.262 ************************************ 00:07:14.262 START TEST accel_xor 00:07:14.262 ************************************ 00:07:14.262 
10:12:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:14.262 10:12:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.262 10:12:27 -- accel/accel.sh@17 -- # local accel_module 00:07:14.262 10:12:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:14.262 10:12:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:14.262 10:12:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.262 10:12:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.262 10:12:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.262 10:12:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.262 10:12:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.263 10:12:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.263 10:12:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.263 10:12:27 -- accel/accel.sh@42 -- # jq -r . 00:07:14.263 [2024-07-26 10:12:27.421630] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:14.263 [2024-07-26 10:12:27.421701] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68220 ] 00:07:14.263 [2024-07-26 10:12:27.550467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.263 [2024-07-26 10:12:27.626485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.637 10:12:28 -- accel/accel.sh@18 -- # out=' 00:07:15.637 SPDK Configuration: 00:07:15.637 Core mask: 0x1 00:07:15.637 00:07:15.637 Accel Perf Configuration: 00:07:15.637 Workload Type: xor 00:07:15.637 Source buffers: 3 00:07:15.637 Transfer size: 4096 bytes 00:07:15.637 Vector count 1 00:07:15.637 Module: software 00:07:15.637 Queue depth: 32 00:07:15.637 Allocate depth: 32 00:07:15.637 # threads/core: 1 00:07:15.637 Run time: 1 seconds 00:07:15.637 Verify: Yes 00:07:15.637 00:07:15.637 Running for 1 seconds... 00:07:15.637 00:07:15.637 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.637 ------------------------------------------------------------------------------------ 00:07:15.637 0,0 245184/s 957 MiB/s 0 0 00:07:15.637 ==================================================================================== 00:07:15.637 Total 245184/s 957 MiB/s 0 0' 00:07:15.637 10:12:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:15.637 10:12:28 -- accel/accel.sh@20 -- # IFS=: 00:07:15.637 10:12:28 -- accel/accel.sh@20 -- # read -r var val 00:07:15.637 10:12:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:15.637 10:12:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.637 10:12:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.637 10:12:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.637 10:12:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.637 10:12:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.637 10:12:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.637 10:12:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.637 10:12:28 -- accel/accel.sh@42 -- # jq -r . 00:07:15.637 [2024-07-26 10:12:28.845425] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
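Note: after each run the harness captures the configuration block into a variable and walks it with the "IFS=:" / "read -r var val" / case "$var" loop visible throughout this trace, finally checking that a module and a workload were reported and that the module is "software". A rough Python equivalent of that parse-and-check step; the sample output below is a made-up snippet whose field names are taken from the tables above:

```python
# Rough Python equivalent of the harness's parse step -- the real check is the
# bash IFS=: / read -r var val / case "$var" loop seen in the trace, ending in
# the [[ -n ... ]] tests and the "software" comparison.

sample_output = """\
Workload Type: xor
Source buffers: 3
Transfer size: 4096 bytes
Module: software
Run time: 1 seconds
Verify: Yes
"""

fields = {}
for line in sample_output.splitlines():
    if ":" in line:
        var, _, val = line.partition(":")
        fields[var.strip()] = val.strip()

accel_module = fields.get("Module")
accel_opc = fields.get("Workload Type")
assert accel_module and accel_opc       # the [[ -n ... ]] equivalents
assert accel_module == "software"       # the software-module check
print("validated:", accel_opc, "on", accel_module)
```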
00:07:15.637 [2024-07-26 10:12:28.845510] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68239 ] 00:07:15.637 [2024-07-26 10:12:28.975608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.637 [2024-07-26 10:12:29.065993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.896 10:12:29 -- accel/accel.sh@21 -- # val= 00:07:15.896 10:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # IFS=: 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # read -r var val 00:07:15.896 10:12:29 -- accel/accel.sh@21 -- # val= 00:07:15.896 10:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # IFS=: 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # read -r var val 00:07:15.896 10:12:29 -- accel/accel.sh@21 -- # val=0x1 00:07:15.896 10:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # IFS=: 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # read -r var val 00:07:15.896 10:12:29 -- accel/accel.sh@21 -- # val= 00:07:15.896 10:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # IFS=: 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # read -r var val 00:07:15.896 10:12:29 -- accel/accel.sh@21 -- # val= 00:07:15.896 10:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # IFS=: 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # read -r var val 00:07:15.896 10:12:29 -- accel/accel.sh@21 -- # val=xor 00:07:15.896 10:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.896 10:12:29 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # IFS=: 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # read -r var val 00:07:15.896 10:12:29 -- accel/accel.sh@21 -- # val=3 00:07:15.896 10:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # IFS=: 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # read -r var val 00:07:15.896 10:12:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.896 10:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # IFS=: 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # read -r var val 00:07:15.896 10:12:29 -- accel/accel.sh@21 -- # val= 00:07:15.896 10:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # IFS=: 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # read -r var val 00:07:15.896 10:12:29 -- accel/accel.sh@21 -- # val=software 00:07:15.896 10:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.896 10:12:29 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # IFS=: 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # read -r var val 00:07:15.896 10:12:29 -- accel/accel.sh@21 -- # val=32 00:07:15.896 10:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # IFS=: 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # read -r var val 00:07:15.896 10:12:29 -- accel/accel.sh@21 -- # val=32 00:07:15.896 10:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # IFS=: 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # read -r var val 00:07:15.896 10:12:29 -- accel/accel.sh@21 -- # val=1 00:07:15.896 10:12:29 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # IFS=: 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # read -r var val 00:07:15.896 10:12:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.896 10:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # IFS=: 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # read -r var val 00:07:15.896 10:12:29 -- accel/accel.sh@21 -- # val=Yes 00:07:15.896 10:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # IFS=: 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # read -r var val 00:07:15.896 10:12:29 -- accel/accel.sh@21 -- # val= 00:07:15.896 10:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # IFS=: 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # read -r var val 00:07:15.896 10:12:29 -- accel/accel.sh@21 -- # val= 00:07:15.896 10:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # IFS=: 00:07:15.896 10:12:29 -- accel/accel.sh@20 -- # read -r var val 00:07:16.830 10:12:30 -- accel/accel.sh@21 -- # val= 00:07:16.830 10:12:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.830 10:12:30 -- accel/accel.sh@20 -- # IFS=: 00:07:16.830 10:12:30 -- accel/accel.sh@20 -- # read -r var val 00:07:16.830 10:12:30 -- accel/accel.sh@21 -- # val= 00:07:16.830 10:12:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.830 10:12:30 -- accel/accel.sh@20 -- # IFS=: 00:07:16.830 10:12:30 -- accel/accel.sh@20 -- # read -r var val 00:07:16.830 10:12:30 -- accel/accel.sh@21 -- # val= 00:07:16.830 10:12:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.830 10:12:30 -- accel/accel.sh@20 -- # IFS=: 00:07:16.830 10:12:30 -- accel/accel.sh@20 -- # read -r var val 00:07:16.830 10:12:30 -- accel/accel.sh@21 -- # val= 00:07:16.830 10:12:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.830 10:12:30 -- accel/accel.sh@20 -- # IFS=: 00:07:16.830 10:12:30 -- accel/accel.sh@20 -- # read -r var val 00:07:16.830 10:12:30 -- accel/accel.sh@21 -- # val= 00:07:16.830 10:12:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.830 10:12:30 -- accel/accel.sh@20 -- # IFS=: 00:07:16.830 10:12:30 -- accel/accel.sh@20 -- # read -r var val 00:07:16.830 10:12:30 -- accel/accel.sh@21 -- # val= 00:07:16.830 10:12:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.830 10:12:30 -- accel/accel.sh@20 -- # IFS=: 00:07:16.830 10:12:30 -- accel/accel.sh@20 -- # read -r var val 00:07:16.830 10:12:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.830 10:12:30 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:16.830 10:12:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.830 00:07:16.830 real 0m2.870s 00:07:16.830 user 0m2.462s 00:07:16.830 sys 0m0.202s 00:07:16.830 10:12:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.830 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:07:16.830 ************************************ 00:07:16.830 END TEST accel_xor 00:07:16.830 ************************************ 00:07:17.096 10:12:30 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:17.096 10:12:30 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:17.096 10:12:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.096 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:07:17.096 ************************************ 00:07:17.096 START TEST accel_dif_verify 00:07:17.096 ************************************ 
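[Editor's sketch, not part of the captured run] The dif_verify pass traced next uses the geometry printed in its Accel Perf Configuration below: 4096-byte transfers split into 512-byte blocks, with 8 bytes of DIF metadata reported per block. A small shell check of that block count, added purely for illustration:

    echo $(( 4096 / 512 ))   # 8 protected blocks per 4096-byte transfer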
00:07:17.096 10:12:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:17.096 10:12:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.096 10:12:30 -- accel/accel.sh@17 -- # local accel_module 00:07:17.096 10:12:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:17.096 10:12:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:17.096 10:12:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.096 10:12:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.096 10:12:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.096 10:12:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.096 10:12:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.096 10:12:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.097 10:12:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.097 10:12:30 -- accel/accel.sh@42 -- # jq -r . 00:07:17.097 [2024-07-26 10:12:30.349393] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:17.097 [2024-07-26 10:12:30.349480] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68274 ] 00:07:17.097 [2024-07-26 10:12:30.485700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.370 [2024-07-26 10:12:30.563841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.745 10:12:31 -- accel/accel.sh@18 -- # out=' 00:07:18.745 SPDK Configuration: 00:07:18.745 Core mask: 0x1 00:07:18.745 00:07:18.745 Accel Perf Configuration: 00:07:18.745 Workload Type: dif_verify 00:07:18.745 Vector size: 4096 bytes 00:07:18.745 Transfer size: 4096 bytes 00:07:18.745 Block size: 512 bytes 00:07:18.745 Metadata size: 8 bytes 00:07:18.745 Vector count 1 00:07:18.745 Module: software 00:07:18.745 Queue depth: 32 00:07:18.745 Allocate depth: 32 00:07:18.745 # threads/core: 1 00:07:18.745 Run time: 1 seconds 00:07:18.745 Verify: No 00:07:18.745 00:07:18.745 Running for 1 seconds... 00:07:18.745 00:07:18.745 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:18.745 ------------------------------------------------------------------------------------ 00:07:18.745 0,0 97344/s 386 MiB/s 0 0 00:07:18.745 ==================================================================================== 00:07:18.745 Total 97344/s 380 MiB/s 0 0' 00:07:18.745 10:12:31 -- accel/accel.sh@20 -- # IFS=: 00:07:18.745 10:12:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:18.745 10:12:31 -- accel/accel.sh@20 -- # read -r var val 00:07:18.745 10:12:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:18.745 10:12:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.745 10:12:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.745 10:12:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.745 10:12:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.745 10:12:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.745 10:12:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.745 10:12:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.745 10:12:31 -- accel/accel.sh@42 -- # jq -r . 00:07:18.745 [2024-07-26 10:12:31.798541] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:18.745 [2024-07-26 10:12:31.798716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68288 ] 00:07:18.745 [2024-07-26 10:12:31.932505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.745 [2024-07-26 10:12:32.022512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.745 10:12:32 -- accel/accel.sh@21 -- # val= 00:07:18.745 10:12:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.745 10:12:32 -- accel/accel.sh@21 -- # val= 00:07:18.745 10:12:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.745 10:12:32 -- accel/accel.sh@21 -- # val=0x1 00:07:18.745 10:12:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.745 10:12:32 -- accel/accel.sh@21 -- # val= 00:07:18.745 10:12:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.745 10:12:32 -- accel/accel.sh@21 -- # val= 00:07:18.745 10:12:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.745 10:12:32 -- accel/accel.sh@21 -- # val=dif_verify 00:07:18.745 10:12:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.745 10:12:32 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.745 10:12:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:18.745 10:12:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.745 10:12:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:18.745 10:12:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.745 10:12:32 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:18.745 10:12:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.745 10:12:32 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:18.745 10:12:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.745 10:12:32 -- accel/accel.sh@21 -- # val= 00:07:18.745 10:12:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.745 10:12:32 -- accel/accel.sh@21 -- # val=software 00:07:18.745 10:12:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.745 10:12:32 -- accel/accel.sh@23 -- # accel_module=software 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.745 10:12:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.746 10:12:32 -- accel/accel.sh@21 
-- # val=32 00:07:18.746 10:12:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.746 10:12:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.746 10:12:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.746 10:12:32 -- accel/accel.sh@21 -- # val=32 00:07:18.746 10:12:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.746 10:12:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.746 10:12:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.746 10:12:32 -- accel/accel.sh@21 -- # val=1 00:07:18.746 10:12:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.746 10:12:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.746 10:12:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.746 10:12:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:18.746 10:12:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.746 10:12:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.746 10:12:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.746 10:12:32 -- accel/accel.sh@21 -- # val=No 00:07:18.746 10:12:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.746 10:12:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.746 10:12:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.746 10:12:32 -- accel/accel.sh@21 -- # val= 00:07:18.746 10:12:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.746 10:12:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.746 10:12:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.746 10:12:32 -- accel/accel.sh@21 -- # val= 00:07:18.746 10:12:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.746 10:12:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.746 10:12:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.121 10:12:33 -- accel/accel.sh@21 -- # val= 00:07:20.121 10:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.121 10:12:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.121 10:12:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.121 10:12:33 -- accel/accel.sh@21 -- # val= 00:07:20.121 10:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.121 10:12:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.121 10:12:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.121 10:12:33 -- accel/accel.sh@21 -- # val= 00:07:20.121 10:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.121 10:12:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.121 10:12:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.121 10:12:33 -- accel/accel.sh@21 -- # val= 00:07:20.121 10:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.121 10:12:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.121 10:12:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.121 10:12:33 -- accel/accel.sh@21 -- # val= 00:07:20.121 10:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.121 10:12:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.121 10:12:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.121 10:12:33 -- accel/accel.sh@21 -- # val= 00:07:20.121 10:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.121 10:12:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.121 10:12:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.121 10:12:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.121 10:12:33 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:20.121 10:12:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.121 00:07:20.121 real 0m2.907s 00:07:20.121 user 0m2.487s 00:07:20.121 sys 0m0.218s 00:07:20.121 10:12:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.121 ************************************ 00:07:20.121 END TEST accel_dif_verify 00:07:20.121 ************************************ 00:07:20.121 
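[Editor's sketch, not part of the captured run] As a sanity check on the dif_verify summary above, the Total line reports 97344 transfers/s at 4096 bytes per transfer, and the bandwidth column follows directly from that product:

    echo $(( 97344 * 4096 / 1024 / 1024 ))   # 380 MiB/s, matching the dif_verify Total line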
10:12:33 -- common/autotest_common.sh@10 -- # set +x 00:07:20.121 10:12:33 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:20.121 10:12:33 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:20.121 10:12:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.121 10:12:33 -- common/autotest_common.sh@10 -- # set +x 00:07:20.121 ************************************ 00:07:20.121 START TEST accel_dif_generate 00:07:20.121 ************************************ 00:07:20.121 10:12:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:20.121 10:12:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.121 10:12:33 -- accel/accel.sh@17 -- # local accel_module 00:07:20.121 10:12:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:20.121 10:12:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:20.121 10:12:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.121 10:12:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.121 10:12:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.121 10:12:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.121 10:12:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.121 10:12:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.121 10:12:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.121 10:12:33 -- accel/accel.sh@42 -- # jq -r . 00:07:20.121 [2024-07-26 10:12:33.299791] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:20.121 [2024-07-26 10:12:33.299888] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68328 ] 00:07:20.121 [2024-07-26 10:12:33.437100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.121 [2024-07-26 10:12:33.526254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.497 10:12:34 -- accel/accel.sh@18 -- # out=' 00:07:21.497 SPDK Configuration: 00:07:21.497 Core mask: 0x1 00:07:21.497 00:07:21.497 Accel Perf Configuration: 00:07:21.497 Workload Type: dif_generate 00:07:21.497 Vector size: 4096 bytes 00:07:21.497 Transfer size: 4096 bytes 00:07:21.497 Block size: 512 bytes 00:07:21.497 Metadata size: 8 bytes 00:07:21.497 Vector count 1 00:07:21.497 Module: software 00:07:21.497 Queue depth: 32 00:07:21.497 Allocate depth: 32 00:07:21.497 # threads/core: 1 00:07:21.497 Run time: 1 seconds 00:07:21.497 Verify: No 00:07:21.497 00:07:21.497 Running for 1 seconds... 
00:07:21.497 00:07:21.497 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:21.497 ------------------------------------------------------------------------------------ 00:07:21.497 0,0 126336/s 501 MiB/s 0 0 00:07:21.497 ==================================================================================== 00:07:21.497 Total 126336/s 493 MiB/s 0 0' 00:07:21.497 10:12:34 -- accel/accel.sh@20 -- # IFS=: 00:07:21.497 10:12:34 -- accel/accel.sh@20 -- # read -r var val 00:07:21.497 10:12:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:21.497 10:12:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:21.497 10:12:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.497 10:12:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.497 10:12:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.497 10:12:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.497 10:12:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.497 10:12:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.497 10:12:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.497 10:12:34 -- accel/accel.sh@42 -- # jq -r . 00:07:21.497 [2024-07-26 10:12:34.757299] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:21.497 [2024-07-26 10:12:34.757400] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68342 ] 00:07:21.497 [2024-07-26 10:12:34.894315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.756 [2024-07-26 10:12:34.979361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.756 10:12:35 -- accel/accel.sh@21 -- # val= 00:07:21.756 10:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.756 10:12:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.756 10:12:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.756 10:12:35 -- accel/accel.sh@21 -- # val= 00:07:21.756 10:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.756 10:12:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.756 10:12:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.756 10:12:35 -- accel/accel.sh@21 -- # val=0x1 00:07:21.756 10:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.756 10:12:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.756 10:12:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.756 10:12:35 -- accel/accel.sh@21 -- # val= 00:07:21.756 10:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.756 10:12:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.756 10:12:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.756 10:12:35 -- accel/accel.sh@21 -- # val= 00:07:21.756 10:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.756 10:12:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.756 10:12:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.756 10:12:35 -- accel/accel.sh@21 -- # val=dif_generate 00:07:21.756 10:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.756 10:12:35 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.757 10:12:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:21.757 10:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # read -r var val 
00:07:21.757 10:12:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:21.757 10:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.757 10:12:35 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:21.757 10:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.757 10:12:35 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:21.757 10:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.757 10:12:35 -- accel/accel.sh@21 -- # val= 00:07:21.757 10:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.757 10:12:35 -- accel/accel.sh@21 -- # val=software 00:07:21.757 10:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.757 10:12:35 -- accel/accel.sh@23 -- # accel_module=software 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.757 10:12:35 -- accel/accel.sh@21 -- # val=32 00:07:21.757 10:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.757 10:12:35 -- accel/accel.sh@21 -- # val=32 00:07:21.757 10:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.757 10:12:35 -- accel/accel.sh@21 -- # val=1 00:07:21.757 10:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.757 10:12:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:21.757 10:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.757 10:12:35 -- accel/accel.sh@21 -- # val=No 00:07:21.757 10:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.757 10:12:35 -- accel/accel.sh@21 -- # val= 00:07:21.757 10:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.757 10:12:35 -- accel/accel.sh@21 -- # val= 00:07:21.757 10:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.757 10:12:35 -- accel/accel.sh@20 -- # read -r var val 00:07:23.138 10:12:36 -- accel/accel.sh@21 -- # val= 00:07:23.138 10:12:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.138 10:12:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.138 10:12:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.138 10:12:36 -- accel/accel.sh@21 -- # val= 00:07:23.138 10:12:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.138 10:12:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.138 10:12:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.138 10:12:36 -- accel/accel.sh@21 -- # val= 00:07:23.138 10:12:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.138 10:12:36 -- 
accel/accel.sh@20 -- # IFS=: 00:07:23.139 10:12:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.139 10:12:36 -- accel/accel.sh@21 -- # val= 00:07:23.139 10:12:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.139 10:12:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.139 10:12:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.139 10:12:36 -- accel/accel.sh@21 -- # val= 00:07:23.139 10:12:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.139 10:12:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.139 10:12:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.139 10:12:36 -- accel/accel.sh@21 -- # val= 00:07:23.139 10:12:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.139 10:12:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.139 10:12:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.139 10:12:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:23.139 10:12:36 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:23.139 10:12:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.139 00:07:23.139 real 0m2.911s 00:07:23.139 user 0m2.481s 00:07:23.139 sys 0m0.228s 00:07:23.139 10:12:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.139 ************************************ 00:07:23.139 END TEST accel_dif_generate 00:07:23.139 ************************************ 00:07:23.139 10:12:36 -- common/autotest_common.sh@10 -- # set +x 00:07:23.139 10:12:36 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:23.139 10:12:36 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:23.139 10:12:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.139 10:12:36 -- common/autotest_common.sh@10 -- # set +x 00:07:23.139 ************************************ 00:07:23.139 START TEST accel_dif_generate_copy 00:07:23.139 ************************************ 00:07:23.139 10:12:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:23.139 10:12:36 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.139 10:12:36 -- accel/accel.sh@17 -- # local accel_module 00:07:23.139 10:12:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:23.139 10:12:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:23.139 10:12:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.139 10:12:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.139 10:12:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.139 10:12:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.139 10:12:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.139 10:12:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.139 10:12:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.139 10:12:36 -- accel/accel.sh@42 -- # jq -r . 00:07:23.139 [2024-07-26 10:12:36.264599] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
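[Editor's sketch, not part of the captured run] The dif_generate pass that just ended reported 126336 transfers/s in its Total line, and the dif_generate_copy pass starting here is driven by the same accel_perf binary. Two illustrative shell lines, again assuming the /dev/fd/62 JSON config supplied by the harness is optional for a default software-module run:

    echo $(( 126336 * 4096 / 1024 / 1024 ))   # 493 MiB/s, matching the dif_generate Total line
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy   # 1-second dif_generate_copy run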
00:07:23.139 [2024-07-26 10:12:36.264703] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68377 ] 00:07:23.139 [2024-07-26 10:12:36.397901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.139 [2024-07-26 10:12:36.474851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.515 10:12:37 -- accel/accel.sh@18 -- # out=' 00:07:24.515 SPDK Configuration: 00:07:24.515 Core mask: 0x1 00:07:24.515 00:07:24.515 Accel Perf Configuration: 00:07:24.515 Workload Type: dif_generate_copy 00:07:24.515 Vector size: 4096 bytes 00:07:24.515 Transfer size: 4096 bytes 00:07:24.515 Vector count 1 00:07:24.515 Module: software 00:07:24.515 Queue depth: 32 00:07:24.515 Allocate depth: 32 00:07:24.515 # threads/core: 1 00:07:24.515 Run time: 1 seconds 00:07:24.515 Verify: No 00:07:24.515 00:07:24.515 Running for 1 seconds... 00:07:24.515 00:07:24.515 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:24.515 ------------------------------------------------------------------------------------ 00:07:24.515 0,0 92192/s 365 MiB/s 0 0 00:07:24.515 ==================================================================================== 00:07:24.515 Total 92192/s 360 MiB/s 0 0' 00:07:24.515 10:12:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.515 10:12:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:24.515 10:12:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.515 10:12:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:24.515 10:12:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.515 10:12:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.515 10:12:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.515 10:12:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.515 10:12:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.515 10:12:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.515 10:12:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.515 10:12:37 -- accel/accel.sh@42 -- # jq -r . 00:07:24.515 [2024-07-26 10:12:37.711072] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:24.515 [2024-07-26 10:12:37.711177] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68396 ] 00:07:24.515 [2024-07-26 10:12:37.847725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.515 [2024-07-26 10:12:37.939572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.774 10:12:37 -- accel/accel.sh@21 -- # val= 00:07:24.774 10:12:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.774 10:12:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.774 10:12:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.774 10:12:37 -- accel/accel.sh@21 -- # val= 00:07:24.774 10:12:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.774 10:12:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.774 10:12:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.774 10:12:37 -- accel/accel.sh@21 -- # val=0x1 00:07:24.774 10:12:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.774 10:12:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.774 10:12:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.774 10:12:37 -- accel/accel.sh@21 -- # val= 00:07:24.774 10:12:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.774 10:12:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.774 10:12:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.774 10:12:37 -- accel/accel.sh@21 -- # val= 00:07:24.774 10:12:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.774 10:12:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.774 10:12:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.774 10:12:37 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:24.774 10:12:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.774 10:12:37 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:24.774 10:12:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.774 10:12:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.774 10:12:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:24.774 10:12:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.774 10:12:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.774 10:12:38 -- accel/accel.sh@20 -- # read -r var val 00:07:24.774 10:12:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:24.774 10:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.774 10:12:38 -- accel/accel.sh@20 -- # IFS=: 00:07:24.774 10:12:38 -- accel/accel.sh@20 -- # read -r var val 00:07:24.774 10:12:38 -- accel/accel.sh@21 -- # val= 00:07:24.774 10:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.774 10:12:38 -- accel/accel.sh@20 -- # IFS=: 00:07:24.774 10:12:38 -- accel/accel.sh@20 -- # read -r var val 00:07:24.774 10:12:38 -- accel/accel.sh@21 -- # val=software 00:07:24.774 10:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.774 10:12:38 -- accel/accel.sh@23 -- # accel_module=software 00:07:24.774 10:12:38 -- accel/accel.sh@20 -- # IFS=: 00:07:24.774 10:12:38 -- accel/accel.sh@20 -- # read -r var val 00:07:24.774 10:12:38 -- accel/accel.sh@21 -- # val=32 00:07:24.774 10:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.774 10:12:38 -- accel/accel.sh@20 -- # IFS=: 00:07:24.774 10:12:38 -- accel/accel.sh@20 -- # read -r var val 00:07:24.774 10:12:38 -- accel/accel.sh@21 -- # val=32 00:07:24.774 10:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.774 10:12:38 -- accel/accel.sh@20 -- # IFS=: 00:07:24.774 10:12:38 -- accel/accel.sh@20 -- # read -r var val 00:07:24.774 10:12:38 -- accel/accel.sh@21 
-- # val=1 00:07:24.774 10:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.774 10:12:38 -- accel/accel.sh@20 -- # IFS=: 00:07:24.774 10:12:38 -- accel/accel.sh@20 -- # read -r var val 00:07:24.774 10:12:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:24.774 10:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.774 10:12:38 -- accel/accel.sh@20 -- # IFS=: 00:07:24.774 10:12:38 -- accel/accel.sh@20 -- # read -r var val 00:07:24.774 10:12:38 -- accel/accel.sh@21 -- # val=No 00:07:24.774 10:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.774 10:12:38 -- accel/accel.sh@20 -- # IFS=: 00:07:24.774 10:12:38 -- accel/accel.sh@20 -- # read -r var val 00:07:24.774 10:12:38 -- accel/accel.sh@21 -- # val= 00:07:24.774 10:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.774 10:12:38 -- accel/accel.sh@20 -- # IFS=: 00:07:24.774 10:12:38 -- accel/accel.sh@20 -- # read -r var val 00:07:24.774 10:12:38 -- accel/accel.sh@21 -- # val= 00:07:24.774 10:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.774 10:12:38 -- accel/accel.sh@20 -- # IFS=: 00:07:24.774 10:12:38 -- accel/accel.sh@20 -- # read -r var val 00:07:25.710 10:12:39 -- accel/accel.sh@21 -- # val= 00:07:25.710 10:12:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.710 10:12:39 -- accel/accel.sh@20 -- # IFS=: 00:07:25.710 10:12:39 -- accel/accel.sh@20 -- # read -r var val 00:07:25.710 10:12:39 -- accel/accel.sh@21 -- # val= 00:07:25.710 10:12:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.710 10:12:39 -- accel/accel.sh@20 -- # IFS=: 00:07:25.710 10:12:39 -- accel/accel.sh@20 -- # read -r var val 00:07:25.710 10:12:39 -- accel/accel.sh@21 -- # val= 00:07:25.710 10:12:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.711 10:12:39 -- accel/accel.sh@20 -- # IFS=: 00:07:25.711 10:12:39 -- accel/accel.sh@20 -- # read -r var val 00:07:25.711 10:12:39 -- accel/accel.sh@21 -- # val= 00:07:25.711 10:12:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.711 10:12:39 -- accel/accel.sh@20 -- # IFS=: 00:07:25.711 10:12:39 -- accel/accel.sh@20 -- # read -r var val 00:07:25.711 10:12:39 -- accel/accel.sh@21 -- # val= 00:07:25.711 10:12:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.711 10:12:39 -- accel/accel.sh@20 -- # IFS=: 00:07:25.711 10:12:39 -- accel/accel.sh@20 -- # read -r var val 00:07:25.711 10:12:39 -- accel/accel.sh@21 -- # val= 00:07:25.711 10:12:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.711 10:12:39 -- accel/accel.sh@20 -- # IFS=: 00:07:25.711 10:12:39 -- accel/accel.sh@20 -- # read -r var val 00:07:25.711 10:12:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:25.711 10:12:39 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:25.711 10:12:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.711 00:07:25.711 real 0m2.915s 00:07:25.711 user 0m2.480s 00:07:25.711 sys 0m0.226s 00:07:25.711 10:12:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.711 ************************************ 00:07:25.711 END TEST accel_dif_generate_copy 00:07:25.711 ************************************ 00:07:25.711 10:12:39 -- common/autotest_common.sh@10 -- # set +x 00:07:25.969 10:12:39 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:25.969 10:12:39 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.969 10:12:39 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:25.969 10:12:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:25.969 10:12:39 -- 
common/autotest_common.sh@10 -- # set +x 00:07:25.969 ************************************ 00:07:25.969 START TEST accel_comp 00:07:25.969 ************************************ 00:07:25.969 10:12:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.969 10:12:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.969 10:12:39 -- accel/accel.sh@17 -- # local accel_module 00:07:25.969 10:12:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.969 10:12:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.969 10:12:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.969 10:12:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.969 10:12:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.969 10:12:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.969 10:12:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.969 10:12:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.969 10:12:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.969 10:12:39 -- accel/accel.sh@42 -- # jq -r . 00:07:25.969 [2024-07-26 10:12:39.229547] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:25.970 [2024-07-26 10:12:39.229822] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68431 ] 00:07:25.970 [2024-07-26 10:12:39.364498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.228 [2024-07-26 10:12:39.443361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.604 10:12:40 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:27.604 00:07:27.604 SPDK Configuration: 00:07:27.604 Core mask: 0x1 00:07:27.604 00:07:27.604 Accel Perf Configuration: 00:07:27.604 Workload Type: compress 00:07:27.604 Transfer size: 4096 bytes 00:07:27.604 Vector count 1 00:07:27.604 Module: software 00:07:27.604 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:27.604 Queue depth: 32 00:07:27.604 Allocate depth: 32 00:07:27.604 # threads/core: 1 00:07:27.604 Run time: 1 seconds 00:07:27.604 Verify: No 00:07:27.604 00:07:27.604 Running for 1 seconds... 
00:07:27.604 00:07:27.604 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:27.604 ------------------------------------------------------------------------------------ 00:07:27.604 0,0 46176/s 192 MiB/s 0 0 00:07:27.604 ==================================================================================== 00:07:27.604 Total 46176/s 180 MiB/s 0 0' 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # IFS=: 00:07:27.604 10:12:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.604 10:12:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:27.604 10:12:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.604 10:12:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.604 10:12:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.604 10:12:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.604 10:12:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.604 10:12:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.604 10:12:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.604 10:12:40 -- accel/accel.sh@42 -- # jq -r . 00:07:27.604 [2024-07-26 10:12:40.669295] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:27.604 [2024-07-26 10:12:40.669387] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68450 ] 00:07:27.604 [2024-07-26 10:12:40.802420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.604 [2024-07-26 10:12:40.880890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.604 10:12:40 -- accel/accel.sh@21 -- # val= 00:07:27.604 10:12:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # IFS=: 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.604 10:12:40 -- accel/accel.sh@21 -- # val= 00:07:27.604 10:12:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # IFS=: 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.604 10:12:40 -- accel/accel.sh@21 -- # val= 00:07:27.604 10:12:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # IFS=: 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.604 10:12:40 -- accel/accel.sh@21 -- # val=0x1 00:07:27.604 10:12:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # IFS=: 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.604 10:12:40 -- accel/accel.sh@21 -- # val= 00:07:27.604 10:12:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # IFS=: 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.604 10:12:40 -- accel/accel.sh@21 -- # val= 00:07:27.604 10:12:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # IFS=: 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.604 10:12:40 -- accel/accel.sh@21 -- # val=compress 00:07:27.604 10:12:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.604 10:12:40 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # IFS=: 
00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.604 10:12:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:27.604 10:12:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # IFS=: 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.604 10:12:40 -- accel/accel.sh@21 -- # val= 00:07:27.604 10:12:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # IFS=: 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.604 10:12:40 -- accel/accel.sh@21 -- # val=software 00:07:27.604 10:12:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.604 10:12:40 -- accel/accel.sh@23 -- # accel_module=software 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # IFS=: 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.604 10:12:40 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:27.604 10:12:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # IFS=: 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.604 10:12:40 -- accel/accel.sh@21 -- # val=32 00:07:27.604 10:12:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # IFS=: 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.604 10:12:40 -- accel/accel.sh@21 -- # val=32 00:07:27.604 10:12:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # IFS=: 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.604 10:12:40 -- accel/accel.sh@21 -- # val=1 00:07:27.604 10:12:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # IFS=: 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.604 10:12:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:27.604 10:12:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # IFS=: 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.604 10:12:40 -- accel/accel.sh@21 -- # val=No 00:07:27.604 10:12:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # IFS=: 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.604 10:12:40 -- accel/accel.sh@21 -- # val= 00:07:27.604 10:12:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # IFS=: 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.604 10:12:40 -- accel/accel.sh@21 -- # val= 00:07:27.604 10:12:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # IFS=: 00:07:27.604 10:12:40 -- accel/accel.sh@20 -- # read -r var val 00:07:28.982 10:12:42 -- accel/accel.sh@21 -- # val= 00:07:28.982 10:12:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.982 10:12:42 -- accel/accel.sh@20 -- # IFS=: 00:07:28.982 10:12:42 -- accel/accel.sh@20 -- # read -r var val 00:07:28.982 10:12:42 -- accel/accel.sh@21 -- # val= 00:07:28.982 10:12:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.982 10:12:42 -- accel/accel.sh@20 -- # IFS=: 00:07:28.982 10:12:42 -- accel/accel.sh@20 -- # read -r var val 00:07:28.982 10:12:42 -- accel/accel.sh@21 -- # val= 00:07:28.982 10:12:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.982 10:12:42 -- accel/accel.sh@20 -- # IFS=: 00:07:28.982 10:12:42 -- accel/accel.sh@20 -- # read -r var val 00:07:28.982 10:12:42 -- accel/accel.sh@21 -- # val= 
00:07:28.982 10:12:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.982 10:12:42 -- accel/accel.sh@20 -- # IFS=: 00:07:28.982 10:12:42 -- accel/accel.sh@20 -- # read -r var val 00:07:28.982 10:12:42 -- accel/accel.sh@21 -- # val= 00:07:28.982 ************************************ 00:07:28.982 END TEST accel_comp 00:07:28.982 ************************************ 00:07:28.982 10:12:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.982 10:12:42 -- accel/accel.sh@20 -- # IFS=: 00:07:28.982 10:12:42 -- accel/accel.sh@20 -- # read -r var val 00:07:28.982 10:12:42 -- accel/accel.sh@21 -- # val= 00:07:28.982 10:12:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.982 10:12:42 -- accel/accel.sh@20 -- # IFS=: 00:07:28.982 10:12:42 -- accel/accel.sh@20 -- # read -r var val 00:07:28.982 10:12:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.982 10:12:42 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:28.982 10:12:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.982 00:07:28.982 real 0m2.886s 00:07:28.982 user 0m2.451s 00:07:28.982 sys 0m0.230s 00:07:28.982 10:12:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.982 10:12:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.982 10:12:42 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:28.982 10:12:42 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:28.982 10:12:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:28.982 10:12:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.982 ************************************ 00:07:28.982 START TEST accel_decomp 00:07:28.982 ************************************ 00:07:28.982 10:12:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:28.982 10:12:42 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.982 10:12:42 -- accel/accel.sh@17 -- # local accel_module 00:07:28.982 10:12:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:28.982 10:12:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:28.982 10:12:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.982 10:12:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.982 10:12:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.982 10:12:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.982 10:12:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.982 10:12:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.982 10:12:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.982 10:12:42 -- accel/accel.sh@42 -- # jq -r . 00:07:28.982 [2024-07-26 10:12:42.167071] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:28.982 [2024-07-26 10:12:42.167183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68485 ] 00:07:28.982 [2024-07-26 10:12:42.302761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.982 [2024-07-26 10:12:42.378950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.370 10:12:43 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:30.370 00:07:30.370 SPDK Configuration: 00:07:30.370 Core mask: 0x1 00:07:30.370 00:07:30.370 Accel Perf Configuration: 00:07:30.370 Workload Type: decompress 00:07:30.370 Transfer size: 4096 bytes 00:07:30.370 Vector count 1 00:07:30.370 Module: software 00:07:30.370 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.370 Queue depth: 32 00:07:30.370 Allocate depth: 32 00:07:30.370 # threads/core: 1 00:07:30.370 Run time: 1 seconds 00:07:30.370 Verify: Yes 00:07:30.370 00:07:30.370 Running for 1 seconds... 00:07:30.370 00:07:30.370 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:30.370 ------------------------------------------------------------------------------------ 00:07:30.370 0,0 66240/s 122 MiB/s 0 0 00:07:30.370 ==================================================================================== 00:07:30.370 Total 66240/s 258 MiB/s 0 0' 00:07:30.370 10:12:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:30.370 10:12:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.370 10:12:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.370 10:12:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:30.370 10:12:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.370 10:12:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.370 10:12:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.370 10:12:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.370 10:12:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.370 10:12:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.370 10:12:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.370 10:12:43 -- accel/accel.sh@42 -- # jq -r . 00:07:30.370 [2024-07-26 10:12:43.595435] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:30.370 [2024-07-26 10:12:43.595511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68499 ] 00:07:30.370 [2024-07-26 10:12:43.726707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.370 [2024-07-26 10:12:43.811629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.629 10:12:43 -- accel/accel.sh@21 -- # val= 00:07:30.629 10:12:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.629 10:12:43 -- accel/accel.sh@21 -- # val= 00:07:30.629 10:12:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.629 10:12:43 -- accel/accel.sh@21 -- # val= 00:07:30.629 10:12:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.629 10:12:43 -- accel/accel.sh@21 -- # val=0x1 00:07:30.629 10:12:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.629 10:12:43 -- accel/accel.sh@21 -- # val= 00:07:30.629 10:12:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.629 10:12:43 -- accel/accel.sh@21 -- # val= 00:07:30.629 10:12:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.629 10:12:43 -- accel/accel.sh@21 -- # val=decompress 00:07:30.629 10:12:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.629 10:12:43 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.629 10:12:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:30.629 10:12:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.629 10:12:43 -- accel/accel.sh@21 -- # val= 00:07:30.629 10:12:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.629 10:12:43 -- accel/accel.sh@21 -- # val=software 00:07:30.629 10:12:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.629 10:12:43 -- accel/accel.sh@23 -- # accel_module=software 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.629 10:12:43 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.629 10:12:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.629 10:12:43 -- accel/accel.sh@21 -- # val=32 00:07:30.629 10:12:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.629 10:12:43 -- 
accel/accel.sh@21 -- # val=32 00:07:30.629 10:12:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.629 10:12:43 -- accel/accel.sh@21 -- # val=1 00:07:30.629 10:12:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.629 10:12:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:30.629 10:12:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.629 10:12:43 -- accel/accel.sh@21 -- # val=Yes 00:07:30.629 10:12:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.629 10:12:43 -- accel/accel.sh@21 -- # val= 00:07:30.629 10:12:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.629 10:12:43 -- accel/accel.sh@21 -- # val= 00:07:30.629 10:12:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.629 10:12:43 -- accel/accel.sh@20 -- # read -r var val 00:07:31.565 10:12:45 -- accel/accel.sh@21 -- # val= 00:07:31.565 10:12:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.565 10:12:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.565 10:12:45 -- accel/accel.sh@20 -- # read -r var val 00:07:31.565 10:12:45 -- accel/accel.sh@21 -- # val= 00:07:31.565 10:12:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.565 10:12:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.565 10:12:45 -- accel/accel.sh@20 -- # read -r var val 00:07:31.565 10:12:45 -- accel/accel.sh@21 -- # val= 00:07:31.565 10:12:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.565 10:12:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.565 10:12:45 -- accel/accel.sh@20 -- # read -r var val 00:07:31.565 10:12:45 -- accel/accel.sh@21 -- # val= 00:07:31.565 10:12:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.565 10:12:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.565 10:12:45 -- accel/accel.sh@20 -- # read -r var val 00:07:31.565 10:12:45 -- accel/accel.sh@21 -- # val= 00:07:31.565 10:12:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.565 10:12:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.565 10:12:45 -- accel/accel.sh@20 -- # read -r var val 00:07:31.565 10:12:45 -- accel/accel.sh@21 -- # val= 00:07:31.565 10:12:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.565 10:12:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.565 10:12:45 -- accel/accel.sh@20 -- # read -r var val 00:07:31.565 10:12:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:31.565 10:12:45 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:31.565 10:12:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.565 00:07:31.565 real 0m2.874s 00:07:31.565 user 0m2.441s 00:07:31.565 sys 0m0.225s 00:07:31.565 10:12:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.565 10:12:45 -- common/autotest_common.sh@10 -- # set +x 00:07:31.565 ************************************ 00:07:31.565 END TEST accel_decomp 00:07:31.565 ************************************ 00:07:31.824 10:12:45 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:07:31.824 10:12:45 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:31.824 10:12:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.824 10:12:45 -- common/autotest_common.sh@10 -- # set +x 00:07:31.824 ************************************ 00:07:31.824 START TEST accel_decmop_full 00:07:31.824 ************************************ 00:07:31.824 10:12:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:31.824 10:12:45 -- accel/accel.sh@16 -- # local accel_opc 00:07:31.824 10:12:45 -- accel/accel.sh@17 -- # local accel_module 00:07:31.824 10:12:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:31.824 10:12:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:31.824 10:12:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.824 10:12:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.824 10:12:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.824 10:12:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.824 10:12:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.824 10:12:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.824 10:12:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.824 10:12:45 -- accel/accel.sh@42 -- # jq -r . 00:07:31.824 [2024-07-26 10:12:45.089375] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:31.824 [2024-07-26 10:12:45.089468] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68533 ] 00:07:31.824 [2024-07-26 10:12:45.219491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.083 [2024-07-26 10:12:45.291502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.465 10:12:46 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:33.465 00:07:33.465 SPDK Configuration: 00:07:33.465 Core mask: 0x1 00:07:33.465 00:07:33.465 Accel Perf Configuration: 00:07:33.465 Workload Type: decompress 00:07:33.465 Transfer size: 111250 bytes 00:07:33.465 Vector count 1 00:07:33.465 Module: software 00:07:33.465 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.465 Queue depth: 32 00:07:33.465 Allocate depth: 32 00:07:33.465 # threads/core: 1 00:07:33.465 Run time: 1 seconds 00:07:33.465 Verify: Yes 00:07:33.465 00:07:33.465 Running for 1 seconds... 
00:07:33.465 00:07:33.465 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:33.465 ------------------------------------------------------------------------------------ 00:07:33.465 0,0 4768/s 196 MiB/s 0 0 00:07:33.465 ==================================================================================== 00:07:33.465 Total 4768/s 505 MiB/s 0 0' 00:07:33.465 10:12:46 -- accel/accel.sh@20 -- # IFS=: 00:07:33.465 10:12:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.465 10:12:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:33.465 10:12:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.465 10:12:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:33.465 10:12:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.465 10:12:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.465 10:12:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.465 10:12:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.465 10:12:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.465 10:12:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.466 10:12:46 -- accel/accel.sh@42 -- # jq -r . 00:07:33.466 [2024-07-26 10:12:46.535826] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:33.466 [2024-07-26 10:12:46.536704] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68553 ] 00:07:33.466 [2024-07-26 10:12:46.666560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.466 [2024-07-26 10:12:46.757322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.466 10:12:46 -- accel/accel.sh@21 -- # val= 00:07:33.466 10:12:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # IFS=: 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.466 10:12:46 -- accel/accel.sh@21 -- # val= 00:07:33.466 10:12:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # IFS=: 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.466 10:12:46 -- accel/accel.sh@21 -- # val= 00:07:33.466 10:12:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # IFS=: 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.466 10:12:46 -- accel/accel.sh@21 -- # val=0x1 00:07:33.466 10:12:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # IFS=: 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.466 10:12:46 -- accel/accel.sh@21 -- # val= 00:07:33.466 10:12:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # IFS=: 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.466 10:12:46 -- accel/accel.sh@21 -- # val= 00:07:33.466 10:12:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # IFS=: 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.466 10:12:46 -- accel/accel.sh@21 -- # val=decompress 00:07:33.466 10:12:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.466 10:12:46 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:33.466 10:12:46 -- accel/accel.sh@20 
-- # IFS=: 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.466 10:12:46 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:33.466 10:12:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # IFS=: 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.466 10:12:46 -- accel/accel.sh@21 -- # val= 00:07:33.466 10:12:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # IFS=: 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.466 10:12:46 -- accel/accel.sh@21 -- # val=software 00:07:33.466 10:12:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.466 10:12:46 -- accel/accel.sh@23 -- # accel_module=software 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # IFS=: 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.466 10:12:46 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.466 10:12:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # IFS=: 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.466 10:12:46 -- accel/accel.sh@21 -- # val=32 00:07:33.466 10:12:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # IFS=: 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.466 10:12:46 -- accel/accel.sh@21 -- # val=32 00:07:33.466 10:12:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # IFS=: 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.466 10:12:46 -- accel/accel.sh@21 -- # val=1 00:07:33.466 10:12:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # IFS=: 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.466 10:12:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:33.466 10:12:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # IFS=: 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.466 10:12:46 -- accel/accel.sh@21 -- # val=Yes 00:07:33.466 10:12:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # IFS=: 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.466 10:12:46 -- accel/accel.sh@21 -- # val= 00:07:33.466 10:12:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # IFS=: 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.466 10:12:46 -- accel/accel.sh@21 -- # val= 00:07:33.466 10:12:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # IFS=: 00:07:33.466 10:12:46 -- accel/accel.sh@20 -- # read -r var val 00:07:34.842 10:12:47 -- accel/accel.sh@21 -- # val= 00:07:34.842 10:12:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.842 10:12:47 -- accel/accel.sh@20 -- # IFS=: 00:07:34.842 10:12:47 -- accel/accel.sh@20 -- # read -r var val 00:07:34.842 10:12:47 -- accel/accel.sh@21 -- # val= 00:07:34.842 10:12:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.842 10:12:47 -- accel/accel.sh@20 -- # IFS=: 00:07:34.842 10:12:47 -- accel/accel.sh@20 -- # read -r var val 00:07:34.842 10:12:47 -- accel/accel.sh@21 -- # val= 00:07:34.842 10:12:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.842 10:12:47 -- accel/accel.sh@20 -- # IFS=: 00:07:34.842 10:12:47 -- accel/accel.sh@20 -- # read -r var val 00:07:34.842 10:12:47 -- accel/accel.sh@21 -- # 
val= 00:07:34.842 10:12:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.842 10:12:47 -- accel/accel.sh@20 -- # IFS=: 00:07:34.842 10:12:47 -- accel/accel.sh@20 -- # read -r var val 00:07:34.842 10:12:47 -- accel/accel.sh@21 -- # val= 00:07:34.842 10:12:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.842 10:12:47 -- accel/accel.sh@20 -- # IFS=: 00:07:34.842 10:12:47 -- accel/accel.sh@20 -- # read -r var val 00:07:34.842 ************************************ 00:07:34.842 END TEST accel_decmop_full 00:07:34.842 ************************************ 00:07:34.842 10:12:47 -- accel/accel.sh@21 -- # val= 00:07:34.842 10:12:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.842 10:12:47 -- accel/accel.sh@20 -- # IFS=: 00:07:34.842 10:12:47 -- accel/accel.sh@20 -- # read -r var val 00:07:34.842 10:12:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:34.842 10:12:47 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:34.842 10:12:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.842 00:07:34.842 real 0m2.906s 00:07:34.842 user 0m2.485s 00:07:34.842 sys 0m0.216s 00:07:34.842 10:12:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.842 10:12:47 -- common/autotest_common.sh@10 -- # set +x 00:07:34.842 10:12:48 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:34.842 10:12:48 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:34.842 10:12:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:34.842 10:12:48 -- common/autotest_common.sh@10 -- # set +x 00:07:34.842 ************************************ 00:07:34.842 START TEST accel_decomp_mcore 00:07:34.842 ************************************ 00:07:34.842 10:12:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:34.842 10:12:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:34.842 10:12:48 -- accel/accel.sh@17 -- # local accel_module 00:07:34.842 10:12:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:34.842 10:12:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.842 10:12:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:34.842 10:12:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.842 10:12:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.842 10:12:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.842 10:12:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.842 10:12:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.842 10:12:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.842 10:12:48 -- accel/accel.sh@42 -- # jq -r . 00:07:34.842 [2024-07-26 10:12:48.035012] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:34.843 [2024-07-26 10:12:48.035100] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68586 ] 00:07:34.843 [2024-07-26 10:12:48.170970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.843 [2024-07-26 10:12:48.262498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.843 [2024-07-26 10:12:48.262632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.843 [2024-07-26 10:12:48.262682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.843 [2024-07-26 10:12:48.262687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.225 10:12:49 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:36.225 00:07:36.225 SPDK Configuration: 00:07:36.225 Core mask: 0xf 00:07:36.225 00:07:36.225 Accel Perf Configuration: 00:07:36.225 Workload Type: decompress 00:07:36.225 Transfer size: 4096 bytes 00:07:36.225 Vector count 1 00:07:36.225 Module: software 00:07:36.225 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:36.225 Queue depth: 32 00:07:36.225 Allocate depth: 32 00:07:36.225 # threads/core: 1 00:07:36.225 Run time: 1 seconds 00:07:36.225 Verify: Yes 00:07:36.225 00:07:36.225 Running for 1 seconds... 00:07:36.225 00:07:36.225 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:36.225 ------------------------------------------------------------------------------------ 00:07:36.225 0,0 59232/s 109 MiB/s 0 0 00:07:36.225 3,0 59104/s 108 MiB/s 0 0 00:07:36.225 2,0 59296/s 109 MiB/s 0 0 00:07:36.225 1,0 58784/s 108 MiB/s 0 0 00:07:36.225 ==================================================================================== 00:07:36.225 Total 236416/s 923 MiB/s 0 0' 00:07:36.225 10:12:49 -- accel/accel.sh@20 -- # IFS=: 00:07:36.225 10:12:49 -- accel/accel.sh@20 -- # read -r var val 00:07:36.225 10:12:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:36.225 10:12:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:36.225 10:12:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.225 10:12:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.225 10:12:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.225 10:12:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.225 10:12:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.225 10:12:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.225 10:12:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.225 10:12:49 -- accel/accel.sh@42 -- # jq -r . 00:07:36.225 [2024-07-26 10:12:49.510795] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:36.225 [2024-07-26 10:12:49.510868] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68610 ] 00:07:36.225 [2024-07-26 10:12:49.642660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.486 [2024-07-26 10:12:49.724409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.486 [2024-07-26 10:12:49.724543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.486 [2024-07-26 10:12:49.724626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.486 [2024-07-26 10:12:49.724894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.486 10:12:49 -- accel/accel.sh@21 -- # val= 00:07:36.486 10:12:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.486 10:12:49 -- accel/accel.sh@20 -- # IFS=: 00:07:36.486 10:12:49 -- accel/accel.sh@20 -- # read -r var val 00:07:36.486 10:12:49 -- accel/accel.sh@21 -- # val= 00:07:36.486 10:12:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.486 10:12:49 -- accel/accel.sh@20 -- # IFS=: 00:07:36.486 10:12:49 -- accel/accel.sh@20 -- # read -r var val 00:07:36.486 10:12:49 -- accel/accel.sh@21 -- # val= 00:07:36.486 10:12:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.486 10:12:49 -- accel/accel.sh@20 -- # IFS=: 00:07:36.486 10:12:49 -- accel/accel.sh@20 -- # read -r var val 00:07:36.486 10:12:49 -- accel/accel.sh@21 -- # val=0xf 00:07:36.486 10:12:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.486 10:12:49 -- accel/accel.sh@20 -- # IFS=: 00:07:36.486 10:12:49 -- accel/accel.sh@20 -- # read -r var val 00:07:36.486 10:12:49 -- accel/accel.sh@21 -- # val= 00:07:36.486 10:12:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.486 10:12:49 -- accel/accel.sh@20 -- # IFS=: 00:07:36.486 10:12:49 -- accel/accel.sh@20 -- # read -r var val 00:07:36.486 10:12:49 -- accel/accel.sh@21 -- # val= 00:07:36.486 10:12:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.486 10:12:49 -- accel/accel.sh@20 -- # IFS=: 00:07:36.486 10:12:49 -- accel/accel.sh@20 -- # read -r var val 00:07:36.486 10:12:49 -- accel/accel.sh@21 -- # val=decompress 00:07:36.486 10:12:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.486 10:12:49 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:36.486 10:12:49 -- accel/accel.sh@20 -- # IFS=: 00:07:36.486 10:12:49 -- accel/accel.sh@20 -- # read -r var val 00:07:36.486 10:12:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:36.486 10:12:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.486 10:12:49 -- accel/accel.sh@20 -- # IFS=: 00:07:36.486 10:12:49 -- accel/accel.sh@20 -- # read -r var val 00:07:36.486 10:12:49 -- accel/accel.sh@21 -- # val= 00:07:36.486 10:12:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.486 10:12:49 -- accel/accel.sh@20 -- # IFS=: 00:07:36.486 10:12:49 -- accel/accel.sh@20 -- # read -r var val 00:07:36.486 10:12:49 -- accel/accel.sh@21 -- # val=software 00:07:36.487 10:12:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.487 10:12:49 -- accel/accel.sh@23 -- # accel_module=software 00:07:36.487 10:12:49 -- accel/accel.sh@20 -- # IFS=: 00:07:36.487 10:12:49 -- accel/accel.sh@20 -- # read -r var val 00:07:36.487 10:12:49 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:36.487 10:12:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.487 10:12:49 -- accel/accel.sh@20 -- # IFS=: 
00:07:36.487 10:12:49 -- accel/accel.sh@20 -- # read -r var val 00:07:36.487 10:12:49 -- accel/accel.sh@21 -- # val=32 00:07:36.487 10:12:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.487 10:12:49 -- accel/accel.sh@20 -- # IFS=: 00:07:36.487 10:12:49 -- accel/accel.sh@20 -- # read -r var val 00:07:36.487 10:12:49 -- accel/accel.sh@21 -- # val=32 00:07:36.487 10:12:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.487 10:12:49 -- accel/accel.sh@20 -- # IFS=: 00:07:36.487 10:12:49 -- accel/accel.sh@20 -- # read -r var val 00:07:36.487 10:12:49 -- accel/accel.sh@21 -- # val=1 00:07:36.487 10:12:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.487 10:12:49 -- accel/accel.sh@20 -- # IFS=: 00:07:36.487 10:12:49 -- accel/accel.sh@20 -- # read -r var val 00:07:36.487 10:12:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:36.487 10:12:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.487 10:12:49 -- accel/accel.sh@20 -- # IFS=: 00:07:36.487 10:12:49 -- accel/accel.sh@20 -- # read -r var val 00:07:36.487 10:12:49 -- accel/accel.sh@21 -- # val=Yes 00:07:36.487 10:12:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.487 10:12:49 -- accel/accel.sh@20 -- # IFS=: 00:07:36.487 10:12:49 -- accel/accel.sh@20 -- # read -r var val 00:07:36.487 10:12:49 -- accel/accel.sh@21 -- # val= 00:07:36.487 10:12:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.487 10:12:49 -- accel/accel.sh@20 -- # IFS=: 00:07:36.487 10:12:49 -- accel/accel.sh@20 -- # read -r var val 00:07:36.487 10:12:49 -- accel/accel.sh@21 -- # val= 00:07:36.487 10:12:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.487 10:12:49 -- accel/accel.sh@20 -- # IFS=: 00:07:36.487 10:12:49 -- accel/accel.sh@20 -- # read -r var val 00:07:37.869 10:12:50 -- accel/accel.sh@21 -- # val= 00:07:37.869 10:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.869 10:12:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.869 10:12:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.869 10:12:50 -- accel/accel.sh@21 -- # val= 00:07:37.869 10:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.869 10:12:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.869 10:12:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.869 10:12:50 -- accel/accel.sh@21 -- # val= 00:07:37.869 10:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.869 10:12:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.869 10:12:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.869 10:12:50 -- accel/accel.sh@21 -- # val= 00:07:37.869 10:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.869 10:12:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.869 10:12:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.869 10:12:50 -- accel/accel.sh@21 -- # val= 00:07:37.869 10:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.869 10:12:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.869 10:12:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.869 10:12:50 -- accel/accel.sh@21 -- # val= 00:07:37.869 10:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.869 10:12:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.869 10:12:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.869 10:12:50 -- accel/accel.sh@21 -- # val= 00:07:37.869 10:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.869 10:12:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.869 10:12:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.869 10:12:50 -- accel/accel.sh@21 -- # val= 00:07:37.869 10:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.869 10:12:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.869 10:12:50 -- 
accel/accel.sh@20 -- # read -r var val 00:07:37.869 10:12:50 -- accel/accel.sh@21 -- # val= 00:07:37.869 10:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.869 10:12:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.869 10:12:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.869 10:12:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.869 10:12:50 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:37.869 10:12:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.869 00:07:37.869 real 0m2.925s 00:07:37.869 user 0m9.332s 00:07:37.869 sys 0m0.235s 00:07:37.869 10:12:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.869 10:12:50 -- common/autotest_common.sh@10 -- # set +x 00:07:37.869 ************************************ 00:07:37.869 END TEST accel_decomp_mcore 00:07:37.869 ************************************ 00:07:37.869 10:12:50 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.869 10:12:50 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:37.869 10:12:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.869 10:12:50 -- common/autotest_common.sh@10 -- # set +x 00:07:37.869 ************************************ 00:07:37.869 START TEST accel_decomp_full_mcore 00:07:37.869 ************************************ 00:07:37.869 10:12:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.869 10:12:50 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.869 10:12:50 -- accel/accel.sh@17 -- # local accel_module 00:07:37.869 10:12:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.869 10:12:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.869 10:12:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.869 10:12:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.869 10:12:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.869 10:12:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.869 10:12:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.869 10:12:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.869 10:12:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.869 10:12:50 -- accel/accel.sh@42 -- # jq -r . 00:07:37.869 [2024-07-26 10:12:51.001071] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:37.869 [2024-07-26 10:12:51.001148] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68642 ] 00:07:37.869 [2024-07-26 10:12:51.135452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.869 [2024-07-26 10:12:51.229654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.869 [2024-07-26 10:12:51.229807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.869 [2024-07-26 10:12:51.230097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.869 [2024-07-26 10:12:51.230113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.257 10:12:52 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:39.257 00:07:39.257 SPDK Configuration: 00:07:39.257 Core mask: 0xf 00:07:39.257 00:07:39.257 Accel Perf Configuration: 00:07:39.257 Workload Type: decompress 00:07:39.257 Transfer size: 111250 bytes 00:07:39.257 Vector count 1 00:07:39.257 Module: software 00:07:39.257 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:39.257 Queue depth: 32 00:07:39.257 Allocate depth: 32 00:07:39.257 # threads/core: 1 00:07:39.257 Run time: 1 seconds 00:07:39.257 Verify: Yes 00:07:39.257 00:07:39.257 Running for 1 seconds... 00:07:39.257 00:07:39.257 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:39.257 ------------------------------------------------------------------------------------ 00:07:39.257 0,0 4544/s 187 MiB/s 0 0 00:07:39.257 3,0 4544/s 187 MiB/s 0 0 00:07:39.257 2,0 4512/s 186 MiB/s 0 0 00:07:39.257 1,0 4544/s 187 MiB/s 0 0 00:07:39.257 ==================================================================================== 00:07:39.257 Total 18144/s 1925 MiB/s 0 0' 00:07:39.257 10:12:52 -- accel/accel.sh@20 -- # IFS=: 00:07:39.257 10:12:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:39.257 10:12:52 -- accel/accel.sh@20 -- # read -r var val 00:07:39.257 10:12:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:39.257 10:12:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.257 10:12:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.257 10:12:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.257 10:12:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.257 10:12:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.257 10:12:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.257 10:12:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.257 10:12:52 -- accel/accel.sh@42 -- # jq -r . 00:07:39.257 [2024-07-26 10:12:52.457892] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:39.257 [2024-07-26 10:12:52.458159] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68670 ] 00:07:39.257 [2024-07-26 10:12:52.590575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.257 [2024-07-26 10:12:52.681613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.257 [2024-07-26 10:12:52.681731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.257 [2024-07-26 10:12:52.681833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.257 [2024-07-26 10:12:52.681836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.515 10:12:52 -- accel/accel.sh@21 -- # val= 00:07:39.515 10:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.515 10:12:52 -- accel/accel.sh@20 -- # IFS=: 00:07:39.515 10:12:52 -- accel/accel.sh@20 -- # read -r var val 00:07:39.515 10:12:52 -- accel/accel.sh@21 -- # val= 00:07:39.515 10:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.515 10:12:52 -- accel/accel.sh@20 -- # IFS=: 00:07:39.515 10:12:52 -- accel/accel.sh@20 -- # read -r var val 00:07:39.515 10:12:52 -- accel/accel.sh@21 -- # val= 00:07:39.516 10:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 10:12:52 -- accel/accel.sh@21 -- # val=0xf 00:07:39.516 10:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 10:12:52 -- accel/accel.sh@21 -- # val= 00:07:39.516 10:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 10:12:52 -- accel/accel.sh@21 -- # val= 00:07:39.516 10:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 10:12:52 -- accel/accel.sh@21 -- # val=decompress 00:07:39.516 10:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 10:12:52 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 10:12:52 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:39.516 10:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 10:12:52 -- accel/accel.sh@21 -- # val= 00:07:39.516 10:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 10:12:52 -- accel/accel.sh@21 -- # val=software 00:07:39.516 10:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 10:12:52 -- accel/accel.sh@23 -- # accel_module=software 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 10:12:52 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:39.516 10:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # IFS=: 
00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 10:12:52 -- accel/accel.sh@21 -- # val=32 00:07:39.516 10:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 10:12:52 -- accel/accel.sh@21 -- # val=32 00:07:39.516 10:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 10:12:52 -- accel/accel.sh@21 -- # val=1 00:07:39.516 10:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 10:12:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:39.516 10:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 10:12:52 -- accel/accel.sh@21 -- # val=Yes 00:07:39.516 10:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 10:12:52 -- accel/accel.sh@21 -- # val= 00:07:39.516 10:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 10:12:52 -- accel/accel.sh@21 -- # val= 00:07:39.516 10:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 10:12:52 -- accel/accel.sh@20 -- # read -r var val 00:07:40.890 10:12:53 -- accel/accel.sh@21 -- # val= 00:07:40.890 10:12:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.890 10:12:53 -- accel/accel.sh@20 -- # IFS=: 00:07:40.890 10:12:53 -- accel/accel.sh@20 -- # read -r var val 00:07:40.890 10:12:53 -- accel/accel.sh@21 -- # val= 00:07:40.890 10:12:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.890 10:12:53 -- accel/accel.sh@20 -- # IFS=: 00:07:40.890 10:12:53 -- accel/accel.sh@20 -- # read -r var val 00:07:40.890 10:12:53 -- accel/accel.sh@21 -- # val= 00:07:40.890 10:12:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.890 10:12:53 -- accel/accel.sh@20 -- # IFS=: 00:07:40.890 10:12:53 -- accel/accel.sh@20 -- # read -r var val 00:07:40.890 10:12:53 -- accel/accel.sh@21 -- # val= 00:07:40.890 10:12:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.890 10:12:53 -- accel/accel.sh@20 -- # IFS=: 00:07:40.890 10:12:53 -- accel/accel.sh@20 -- # read -r var val 00:07:40.890 10:12:53 -- accel/accel.sh@21 -- # val= 00:07:40.890 10:12:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.890 10:12:53 -- accel/accel.sh@20 -- # IFS=: 00:07:40.890 10:12:53 -- accel/accel.sh@20 -- # read -r var val 00:07:40.890 10:12:53 -- accel/accel.sh@21 -- # val= 00:07:40.890 10:12:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.890 10:12:53 -- accel/accel.sh@20 -- # IFS=: 00:07:40.890 10:12:53 -- accel/accel.sh@20 -- # read -r var val 00:07:40.890 10:12:53 -- accel/accel.sh@21 -- # val= 00:07:40.890 10:12:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.890 10:12:53 -- accel/accel.sh@20 -- # IFS=: 00:07:40.890 10:12:53 -- accel/accel.sh@20 -- # read -r var val 00:07:40.890 10:12:53 -- accel/accel.sh@21 -- # val= 00:07:40.890 10:12:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.890 10:12:53 -- accel/accel.sh@20 -- # IFS=: 00:07:40.890 10:12:53 -- 
accel/accel.sh@20 -- # read -r var val 00:07:40.890 10:12:53 -- accel/accel.sh@21 -- # val= 00:07:40.890 10:12:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.890 10:12:53 -- accel/accel.sh@20 -- # IFS=: 00:07:40.890 ************************************ 00:07:40.890 END TEST accel_decomp_full_mcore 00:07:40.890 ************************************ 00:07:40.890 10:12:53 -- accel/accel.sh@20 -- # read -r var val 00:07:40.890 10:12:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:40.891 10:12:53 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:40.891 10:12:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.891 00:07:40.891 real 0m2.976s 00:07:40.891 user 0m9.376s 00:07:40.891 sys 0m0.253s 00:07:40.891 10:12:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.891 10:12:53 -- common/autotest_common.sh@10 -- # set +x 00:07:40.891 10:12:53 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:40.891 10:12:53 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:40.891 10:12:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.891 10:12:53 -- common/autotest_common.sh@10 -- # set +x 00:07:40.891 ************************************ 00:07:40.891 START TEST accel_decomp_mthread 00:07:40.891 ************************************ 00:07:40.891 10:12:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:40.891 10:12:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.891 10:12:54 -- accel/accel.sh@17 -- # local accel_module 00:07:40.891 10:12:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:40.891 10:12:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:40.891 10:12:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.891 10:12:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.891 10:12:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.891 10:12:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.891 10:12:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.891 10:12:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.891 10:12:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.891 10:12:54 -- accel/accel.sh@42 -- # jq -r . 00:07:40.891 [2024-07-26 10:12:54.031010] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:40.891 [2024-07-26 10:12:54.031108] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68702 ] 00:07:40.891 [2024-07-26 10:12:54.168700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.891 [2024-07-26 10:12:54.269749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.266 10:12:55 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:42.266 00:07:42.266 SPDK Configuration: 00:07:42.266 Core mask: 0x1 00:07:42.266 00:07:42.266 Accel Perf Configuration: 00:07:42.266 Workload Type: decompress 00:07:42.266 Transfer size: 4096 bytes 00:07:42.266 Vector count 1 00:07:42.266 Module: software 00:07:42.266 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:42.266 Queue depth: 32 00:07:42.266 Allocate depth: 32 00:07:42.266 # threads/core: 2 00:07:42.266 Run time: 1 seconds 00:07:42.266 Verify: Yes 00:07:42.266 00:07:42.266 Running for 1 seconds... 00:07:42.266 00:07:42.266 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:42.266 ------------------------------------------------------------------------------------ 00:07:42.266 0,1 33344/s 61 MiB/s 0 0 00:07:42.266 0,0 33216/s 61 MiB/s 0 0 00:07:42.266 ==================================================================================== 00:07:42.266 Total 66560/s 260 MiB/s 0 0' 00:07:42.266 10:12:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:42.266 10:12:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.266 10:12:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.266 10:12:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:42.266 10:12:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.266 10:12:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.266 10:12:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.266 10:12:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.266 10:12:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.266 10:12:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.266 10:12:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.266 10:12:55 -- accel/accel.sh@42 -- # jq -r . 00:07:42.266 [2024-07-26 10:12:55.517115] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:42.266 [2024-07-26 10:12:55.518208] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68727 ] 00:07:42.266 [2024-07-26 10:12:55.654420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.524 [2024-07-26 10:12:55.750745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.524 10:12:55 -- accel/accel.sh@21 -- # val= 00:07:42.524 10:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.524 10:12:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.524 10:12:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.524 10:12:55 -- accel/accel.sh@21 -- # val= 00:07:42.524 10:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.524 10:12:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.524 10:12:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.524 10:12:55 -- accel/accel.sh@21 -- # val= 00:07:42.524 10:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.524 10:12:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.524 10:12:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.524 10:12:55 -- accel/accel.sh@21 -- # val=0x1 00:07:42.524 10:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.524 10:12:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.524 10:12:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.524 10:12:55 -- accel/accel.sh@21 -- # val= 00:07:42.524 10:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.524 10:12:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.524 10:12:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.524 10:12:55 -- accel/accel.sh@21 -- # val= 00:07:42.524 10:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.524 10:12:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.524 10:12:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.524 10:12:55 -- accel/accel.sh@21 -- # val=decompress 00:07:42.524 10:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.524 10:12:55 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:42.524 10:12:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.524 10:12:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.524 10:12:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:42.524 10:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.524 10:12:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.525 10:12:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.525 10:12:55 -- accel/accel.sh@21 -- # val= 00:07:42.525 10:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.525 10:12:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.525 10:12:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.525 10:12:55 -- accel/accel.sh@21 -- # val=software 00:07:42.525 10:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.525 10:12:55 -- accel/accel.sh@23 -- # accel_module=software 00:07:42.525 10:12:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.525 10:12:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.525 10:12:55 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:42.525 10:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.525 10:12:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.525 10:12:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.525 10:12:55 -- accel/accel.sh@21 -- # val=32 00:07:42.525 10:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.525 10:12:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.525 10:12:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.525 10:12:55 -- 
accel/accel.sh@21 -- # val=32 00:07:42.525 10:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.525 10:12:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.525 10:12:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.525 10:12:55 -- accel/accel.sh@21 -- # val=2 00:07:42.525 10:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.525 10:12:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.525 10:12:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.525 10:12:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:42.525 10:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.525 10:12:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.525 10:12:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.525 10:12:55 -- accel/accel.sh@21 -- # val=Yes 00:07:42.525 10:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.525 10:12:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.525 10:12:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.525 10:12:55 -- accel/accel.sh@21 -- # val= 00:07:42.525 10:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.525 10:12:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.525 10:12:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.525 10:12:55 -- accel/accel.sh@21 -- # val= 00:07:42.525 10:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.525 10:12:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.525 10:12:55 -- accel/accel.sh@20 -- # read -r var val 00:07:43.924 10:12:56 -- accel/accel.sh@21 -- # val= 00:07:43.924 10:12:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.924 10:12:56 -- accel/accel.sh@20 -- # IFS=: 00:07:43.924 10:12:56 -- accel/accel.sh@20 -- # read -r var val 00:07:43.924 10:12:56 -- accel/accel.sh@21 -- # val= 00:07:43.924 10:12:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.924 10:12:56 -- accel/accel.sh@20 -- # IFS=: 00:07:43.924 10:12:56 -- accel/accel.sh@20 -- # read -r var val 00:07:43.924 10:12:56 -- accel/accel.sh@21 -- # val= 00:07:43.924 10:12:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.925 10:12:56 -- accel/accel.sh@20 -- # IFS=: 00:07:43.925 10:12:56 -- accel/accel.sh@20 -- # read -r var val 00:07:43.925 10:12:56 -- accel/accel.sh@21 -- # val= 00:07:43.925 10:12:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.925 10:12:56 -- accel/accel.sh@20 -- # IFS=: 00:07:43.925 10:12:56 -- accel/accel.sh@20 -- # read -r var val 00:07:43.925 10:12:56 -- accel/accel.sh@21 -- # val= 00:07:43.925 10:12:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.925 10:12:56 -- accel/accel.sh@20 -- # IFS=: 00:07:43.925 10:12:56 -- accel/accel.sh@20 -- # read -r var val 00:07:43.925 10:12:56 -- accel/accel.sh@21 -- # val= 00:07:43.925 10:12:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.925 10:12:56 -- accel/accel.sh@20 -- # IFS=: 00:07:43.925 10:12:56 -- accel/accel.sh@20 -- # read -r var val 00:07:43.925 10:12:56 -- accel/accel.sh@21 -- # val= 00:07:43.925 10:12:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.925 10:12:56 -- accel/accel.sh@20 -- # IFS=: 00:07:43.925 ************************************ 00:07:43.925 END TEST accel_decomp_mthread 00:07:43.925 ************************************ 00:07:43.925 10:12:56 -- accel/accel.sh@20 -- # read -r var val 00:07:43.925 10:12:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:43.925 10:12:56 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:43.925 10:12:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.925 00:07:43.925 real 0m2.968s 00:07:43.925 user 0m2.529s 00:07:43.925 sys 0m0.233s 00:07:43.925 10:12:56 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:07:43.925 10:12:56 -- common/autotest_common.sh@10 -- # set +x 00:07:43.925 10:12:57 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.925 10:12:57 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:43.925 10:12:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.925 10:12:57 -- common/autotest_common.sh@10 -- # set +x 00:07:43.925 ************************************ 00:07:43.925 START TEST accel_deomp_full_mthread 00:07:43.925 ************************************ 00:07:43.925 10:12:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.925 10:12:57 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.925 10:12:57 -- accel/accel.sh@17 -- # local accel_module 00:07:43.925 10:12:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.925 10:12:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.925 10:12:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.925 10:12:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.925 10:12:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.925 10:12:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.925 10:12:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.925 10:12:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.925 10:12:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.925 10:12:57 -- accel/accel.sh@42 -- # jq -r . 00:07:43.925 [2024-07-26 10:12:57.050538] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:43.925 [2024-07-26 10:12:57.050668] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68756 ] 00:07:43.925 [2024-07-26 10:12:57.189125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.925 [2024-07-26 10:12:57.290170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.301 10:12:58 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:45.301 00:07:45.301 SPDK Configuration: 00:07:45.301 Core mask: 0x1 00:07:45.301 00:07:45.301 Accel Perf Configuration: 00:07:45.301 Workload Type: decompress 00:07:45.301 Transfer size: 111250 bytes 00:07:45.301 Vector count 1 00:07:45.301 Module: software 00:07:45.301 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:45.301 Queue depth: 32 00:07:45.301 Allocate depth: 32 00:07:45.301 # threads/core: 2 00:07:45.301 Run time: 1 seconds 00:07:45.301 Verify: Yes 00:07:45.301 00:07:45.301 Running for 1 seconds... 
00:07:45.301 00:07:45.301 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:45.301 ------------------------------------------------------------------------------------ 00:07:45.301 0,1 2272/s 93 MiB/s 0 0 00:07:45.301 0,0 2272/s 93 MiB/s 0 0 00:07:45.301 ==================================================================================== 00:07:45.301 Total 4544/s 482 MiB/s 0 0' 00:07:45.301 10:12:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.301 10:12:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:45.301 10:12:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.301 10:12:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.301 10:12:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:45.301 10:12:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.301 10:12:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.301 10:12:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.301 10:12:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.301 10:12:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.301 10:12:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.301 10:12:58 -- accel/accel.sh@42 -- # jq -r . 00:07:45.301 [2024-07-26 10:12:58.580652] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:45.301 [2024-07-26 10:12:58.580762] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68781 ] 00:07:45.301 [2024-07-26 10:12:58.715391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.560 [2024-07-26 10:12:58.810751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.560 10:12:58 -- accel/accel.sh@21 -- # val= 00:07:45.560 10:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.560 10:12:58 -- accel/accel.sh@21 -- # val= 00:07:45.560 10:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.560 10:12:58 -- accel/accel.sh@21 -- # val= 00:07:45.560 10:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.560 10:12:58 -- accel/accel.sh@21 -- # val=0x1 00:07:45.560 10:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.560 10:12:58 -- accel/accel.sh@21 -- # val= 00:07:45.560 10:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.560 10:12:58 -- accel/accel.sh@21 -- # val= 00:07:45.560 10:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.560 10:12:58 -- accel/accel.sh@21 -- # val=decompress 00:07:45.560 10:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.560 10:12:58 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.560 10:12:58 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:45.560 10:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.560 10:12:58 -- accel/accel.sh@21 -- # val= 00:07:45.560 10:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.560 10:12:58 -- accel/accel.sh@21 -- # val=software 00:07:45.560 10:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.560 10:12:58 -- accel/accel.sh@23 -- # accel_module=software 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.560 10:12:58 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:45.560 10:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.560 10:12:58 -- accel/accel.sh@21 -- # val=32 00:07:45.560 10:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.560 10:12:58 -- accel/accel.sh@21 -- # val=32 00:07:45.560 10:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.560 10:12:58 -- accel/accel.sh@21 -- # val=2 00:07:45.560 10:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.560 10:12:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:45.560 10:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.560 10:12:58 -- accel/accel.sh@21 -- # val=Yes 00:07:45.560 10:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.560 10:12:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.561 10:12:58 -- accel/accel.sh@21 -- # val= 00:07:45.561 10:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.561 10:12:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.561 10:12:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.561 10:12:58 -- accel/accel.sh@21 -- # val= 00:07:45.561 10:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.561 10:12:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.561 10:12:58 -- accel/accel.sh@20 -- # read -r var val 00:07:46.933 10:13:00 -- accel/accel.sh@21 -- # val= 00:07:46.933 10:13:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.934 10:13:00 -- accel/accel.sh@20 -- # IFS=: 00:07:46.934 10:13:00 -- accel/accel.sh@20 -- # read -r var val 00:07:46.934 10:13:00 -- accel/accel.sh@21 -- # val= 00:07:46.934 10:13:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.934 10:13:00 -- accel/accel.sh@20 -- # IFS=: 00:07:46.934 10:13:00 -- accel/accel.sh@20 -- # read -r var val 00:07:46.934 10:13:00 -- accel/accel.sh@21 -- # val= 00:07:46.934 10:13:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.934 10:13:00 -- accel/accel.sh@20 -- # IFS=: 00:07:46.934 10:13:00 -- accel/accel.sh@20 -- # 
read -r var val 00:07:46.934 10:13:00 -- accel/accel.sh@21 -- # val= 00:07:46.934 10:13:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.934 10:13:00 -- accel/accel.sh@20 -- # IFS=: 00:07:46.934 10:13:00 -- accel/accel.sh@20 -- # read -r var val 00:07:46.934 10:13:00 -- accel/accel.sh@21 -- # val= 00:07:46.934 10:13:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.934 10:13:00 -- accel/accel.sh@20 -- # IFS=: 00:07:46.934 10:13:00 -- accel/accel.sh@20 -- # read -r var val 00:07:46.934 10:13:00 -- accel/accel.sh@21 -- # val= 00:07:46.934 10:13:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.934 10:13:00 -- accel/accel.sh@20 -- # IFS=: 00:07:46.934 10:13:00 -- accel/accel.sh@20 -- # read -r var val 00:07:46.934 10:13:00 -- accel/accel.sh@21 -- # val= 00:07:46.934 10:13:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.934 10:13:00 -- accel/accel.sh@20 -- # IFS=: 00:07:46.934 10:13:00 -- accel/accel.sh@20 -- # read -r var val 00:07:46.934 10:13:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:46.934 10:13:00 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:46.934 10:13:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.934 00:07:46.934 real 0m3.051s 00:07:46.934 user 0m2.619s 00:07:46.934 sys 0m0.228s 00:07:46.934 10:13:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.934 10:13:00 -- common/autotest_common.sh@10 -- # set +x 00:07:46.934 ************************************ 00:07:46.934 END TEST accel_deomp_full_mthread 00:07:46.934 ************************************ 00:07:46.934 10:13:00 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:46.934 10:13:00 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:46.934 10:13:00 -- accel/accel.sh@129 -- # build_accel_config 00:07:46.934 10:13:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.934 10:13:00 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:46.934 10:13:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.934 10:13:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.934 10:13:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.934 10:13:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.934 10:13:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.934 10:13:00 -- common/autotest_common.sh@10 -- # set +x 00:07:46.934 10:13:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.934 10:13:00 -- accel/accel.sh@42 -- # jq -r . 00:07:46.934 ************************************ 00:07:46.934 START TEST accel_dif_functional_tests 00:07:46.934 ************************************ 00:07:46.934 10:13:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:46.934 [2024-07-26 10:13:00.174038] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
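The accel_deomp_full_mthread run above is a single accel_perf invocation against the software decompress module, fed a JSON config on /dev/fd/62 generated by build_accel_config; every optional-module check above evaluates false, so the config carries no module overrides and the software implementation services the workload. A rough way to reproduce the benchmark by hand, assuming the flag meanings implied by the configuration dump (-t run time in seconds, -w workload, -l input file, -y verify, -T worker threads per core; -o 0 is kept exactly as the test passes it, and the tool reports a 111250-byte transfer size for this input):

  # Sketch only: 1-second software decompress benchmark of the prebuilt
  # test blob, two worker threads on core 0, with output verification.
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2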
00:07:46.934 [2024-07-26 10:13:00.174123] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68811 ] 00:07:46.934 [2024-07-26 10:13:00.306331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:47.192 [2024-07-26 10:13:00.407799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.192 [2024-07-26 10:13:00.407877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.192 [2024-07-26 10:13:00.407880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.192 00:07:47.192 00:07:47.192 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.192 http://cunit.sourceforge.net/ 00:07:47.192 00:07:47.192 00:07:47.192 Suite: accel_dif 00:07:47.192 Test: verify: DIF generated, GUARD check ...passed 00:07:47.192 Test: verify: DIF generated, APPTAG check ...passed 00:07:47.192 Test: verify: DIF generated, REFTAG check ...passed 00:07:47.192 Test: verify: DIF not generated, GUARD check ...passed 00:07:47.192 Test: verify: DIF not generated, APPTAG check ...[2024-07-26 10:13:00.500526] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:47.192 [2024-07-26 10:13:00.500751] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:47.192 [2024-07-26 10:13:00.500790] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:47.192 passed 00:07:47.192 Test: verify: DIF not generated, REFTAG check ...passed 00:07:47.193 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:47.193 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:47.193 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-07-26 10:13:00.500820] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:47.193 [2024-07-26 10:13:00.500847] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:47.193 [2024-07-26 10:13:00.500937] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:47.193 [2024-07-26 10:13:00.501001] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:47.193 passed 00:07:47.193 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:47.193 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:47.193 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-26 10:13:00.501323] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:47.193 passed 00:07:47.193 Test: generate copy: DIF generated, GUARD check ...passed 00:07:47.193 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:47.193 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:47.193 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:47.193 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:47.193 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:47.193 Test: generate copy: iovecs-len validate ...passed 00:07:47.193 Test: generate copy: buffer alignment validate ...[2024-07-26 10:13:00.501877] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:47.193 passed 00:07:47.193 00:07:47.193 Run Summary: Type Total Ran Passed Failed Inactive 00:07:47.193 suites 1 1 n/a 0 0 00:07:47.193 tests 20 20 20 0 0 00:07:47.193 asserts 204 204 204 0 n/a 00:07:47.193 00:07:47.193 Elapsed time = 0.004 seconds 00:07:47.451 00:07:47.451 real 0m0.578s 00:07:47.451 user 0m0.783s 00:07:47.451 sys 0m0.151s 00:07:47.451 ************************************ 00:07:47.451 END TEST accel_dif_functional_tests 00:07:47.451 ************************************ 00:07:47.451 10:13:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.451 10:13:00 -- common/autotest_common.sh@10 -- # set +x 00:07:47.451 ************************************ 00:07:47.451 END TEST accel 00:07:47.451 ************************************ 00:07:47.451 00:07:47.451 real 1m2.500s 00:07:47.451 user 1m6.761s 00:07:47.451 sys 0m6.048s 00:07:47.451 10:13:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.451 10:13:00 -- common/autotest_common.sh@10 -- # set +x 00:07:47.451 10:13:00 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:47.451 10:13:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:47.451 10:13:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:47.451 10:13:00 -- common/autotest_common.sh@10 -- # set +x 00:07:47.451 ************************************ 00:07:47.451 START TEST accel_rpc 00:07:47.451 ************************************ 00:07:47.451 10:13:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:47.451 * Looking for test storage... 00:07:47.451 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:47.451 10:13:00 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:47.451 10:13:00 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=68880 00:07:47.451 10:13:00 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:47.451 10:13:00 -- accel/accel_rpc.sh@15 -- # waitforlisten 68880 00:07:47.451 10:13:00 -- common/autotest_common.sh@819 -- # '[' -z 68880 ']' 00:07:47.451 10:13:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.451 10:13:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:47.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.451 10:13:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.451 10:13:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:47.451 10:13:00 -- common/autotest_common.sh@10 -- # set +x 00:07:47.709 [2024-07-26 10:13:00.957525] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
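The accel_rpc suite that starts here brings up a bare spdk_tgt with --wait-for-rpc and drives it entirely over JSON-RPC. Stripped of the rpc_cmd/waitforlisten plumbing, the accel_assign_opcode flow exercised below is roughly the following sketch, assuming the default /var/tmp/spdk.sock socket shown in the trace:

  # Start the target with subsystem init deferred until framework_start_init.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
  # ...wait for /var/tmp/spdk.sock to appear (the test uses waitforlisten)...
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software   # pin the copy opcode to the software module
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init                   # complete initialization
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy # prints: software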
00:07:47.709 [2024-07-26 10:13:00.957692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68880 ] 00:07:47.709 [2024-07-26 10:13:01.105260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.967 [2024-07-26 10:13:01.213848] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:47.967 [2024-07-26 10:13:01.214307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.533 10:13:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:48.533 10:13:01 -- common/autotest_common.sh@852 -- # return 0 00:07:48.533 10:13:01 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:48.533 10:13:01 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:48.533 10:13:01 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:48.533 10:13:01 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:48.533 10:13:01 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:48.533 10:13:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:48.533 10:13:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:48.533 10:13:01 -- common/autotest_common.sh@10 -- # set +x 00:07:48.533 ************************************ 00:07:48.533 START TEST accel_assign_opcode 00:07:48.533 ************************************ 00:07:48.533 10:13:01 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:48.533 10:13:01 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:48.533 10:13:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:48.533 10:13:01 -- common/autotest_common.sh@10 -- # set +x 00:07:48.533 [2024-07-26 10:13:01.950887] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:48.533 10:13:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:48.533 10:13:01 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:48.533 10:13:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:48.533 10:13:01 -- common/autotest_common.sh@10 -- # set +x 00:07:48.533 [2024-07-26 10:13:01.958863] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:48.533 10:13:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:48.533 10:13:01 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:48.533 10:13:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:48.533 10:13:01 -- common/autotest_common.sh@10 -- # set +x 00:07:48.792 10:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:48.792 10:13:02 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:48.792 10:13:02 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:48.792 10:13:02 -- accel/accel_rpc.sh@42 -- # grep software 00:07:48.792 10:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:48.792 10:13:02 -- common/autotest_common.sh@10 -- # set +x 00:07:48.792 10:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:48.792 software 00:07:49.062 ************************************ 00:07:49.062 END TEST accel_assign_opcode 00:07:49.062 ************************************ 00:07:49.062 00:07:49.062 real 0m0.305s 00:07:49.062 user 0m0.055s 00:07:49.062 sys 0m0.010s 00:07:49.062 10:13:02 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.062 10:13:02 -- common/autotest_common.sh@10 -- # set +x 00:07:49.062 10:13:02 -- accel/accel_rpc.sh@55 -- # killprocess 68880 00:07:49.062 10:13:02 -- common/autotest_common.sh@926 -- # '[' -z 68880 ']' 00:07:49.062 10:13:02 -- common/autotest_common.sh@930 -- # kill -0 68880 00:07:49.062 10:13:02 -- common/autotest_common.sh@931 -- # uname 00:07:49.062 10:13:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:49.062 10:13:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68880 00:07:49.062 killing process with pid 68880 00:07:49.062 10:13:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:49.062 10:13:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:49.062 10:13:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68880' 00:07:49.062 10:13:02 -- common/autotest_common.sh@945 -- # kill 68880 00:07:49.062 10:13:02 -- common/autotest_common.sh@950 -- # wait 68880 00:07:49.332 ************************************ 00:07:49.332 END TEST accel_rpc 00:07:49.332 ************************************ 00:07:49.332 00:07:49.332 real 0m1.932s 00:07:49.332 user 0m2.035s 00:07:49.332 sys 0m0.461s 00:07:49.332 10:13:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.332 10:13:02 -- common/autotest_common.sh@10 -- # set +x 00:07:49.332 10:13:02 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:49.332 10:13:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:49.332 10:13:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:49.332 10:13:02 -- common/autotest_common.sh@10 -- # set +x 00:07:49.332 ************************************ 00:07:49.332 START TEST app_cmdline 00:07:49.332 ************************************ 00:07:49.332 10:13:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:49.590 * Looking for test storage... 00:07:49.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:49.590 10:13:02 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:49.590 10:13:02 -- app/cmdline.sh@17 -- # spdk_tgt_pid=68972 00:07:49.590 10:13:02 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:49.590 10:13:02 -- app/cmdline.sh@18 -- # waitforlisten 68972 00:07:49.590 10:13:02 -- common/autotest_common.sh@819 -- # '[' -z 68972 ']' 00:07:49.590 10:13:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.590 10:13:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:49.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.590 10:13:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.590 10:13:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:49.590 10:13:02 -- common/autotest_common.sh@10 -- # set +x 00:07:49.590 [2024-07-26 10:13:02.911194] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:49.590 [2024-07-26 10:13:02.911539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68972 ] 00:07:49.849 [2024-07-26 10:13:03.048440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.849 [2024-07-26 10:13:03.160018] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:49.849 [2024-07-26 10:13:03.160259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.785 10:13:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:50.785 10:13:03 -- common/autotest_common.sh@852 -- # return 0 00:07:50.785 10:13:03 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:50.785 { 00:07:50.785 "version": "SPDK v24.01.1-pre git sha1 dbef7efac", 00:07:50.785 "fields": { 00:07:50.785 "major": 24, 00:07:50.785 "minor": 1, 00:07:50.785 "patch": 1, 00:07:50.785 "suffix": "-pre", 00:07:50.785 "commit": "dbef7efac" 00:07:50.785 } 00:07:50.785 } 00:07:50.785 10:13:04 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:50.785 10:13:04 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:50.785 10:13:04 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:50.785 10:13:04 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:50.785 10:13:04 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:50.785 10:13:04 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:50.785 10:13:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.785 10:13:04 -- app/cmdline.sh@26 -- # sort 00:07:50.785 10:13:04 -- common/autotest_common.sh@10 -- # set +x 00:07:50.785 10:13:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.785 10:13:04 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:50.785 10:13:04 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:50.785 10:13:04 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:50.785 10:13:04 -- common/autotest_common.sh@640 -- # local es=0 00:07:50.785 10:13:04 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:50.785 10:13:04 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:50.785 10:13:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:50.785 10:13:04 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:50.785 10:13:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:50.785 10:13:04 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:50.785 10:13:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:50.785 10:13:04 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:50.785 10:13:04 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:50.785 10:13:04 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:51.045 request: 00:07:51.045 { 00:07:51.045 "method": "env_dpdk_get_mem_stats", 00:07:51.045 "req_id": 1 00:07:51.045 } 00:07:51.045 Got 
JSON-RPC error response 00:07:51.045 response: 00:07:51.045 { 00:07:51.045 "code": -32601, 00:07:51.045 "message": "Method not found" 00:07:51.045 } 00:07:51.045 10:13:04 -- common/autotest_common.sh@643 -- # es=1 00:07:51.045 10:13:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:51.045 10:13:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:51.045 10:13:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:51.045 10:13:04 -- app/cmdline.sh@1 -- # killprocess 68972 00:07:51.045 10:13:04 -- common/autotest_common.sh@926 -- # '[' -z 68972 ']' 00:07:51.045 10:13:04 -- common/autotest_common.sh@930 -- # kill -0 68972 00:07:51.045 10:13:04 -- common/autotest_common.sh@931 -- # uname 00:07:51.045 10:13:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:51.045 10:13:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68972 00:07:51.045 killing process with pid 68972 00:07:51.045 10:13:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:51.045 10:13:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:51.045 10:13:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68972' 00:07:51.045 10:13:04 -- common/autotest_common.sh@945 -- # kill 68972 00:07:51.045 10:13:04 -- common/autotest_common.sh@950 -- # wait 68972 00:07:51.611 ************************************ 00:07:51.611 END TEST app_cmdline 00:07:51.611 ************************************ 00:07:51.611 00:07:51.611 real 0m2.100s 00:07:51.611 user 0m2.615s 00:07:51.611 sys 0m0.479s 00:07:51.611 10:13:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.611 10:13:04 -- common/autotest_common.sh@10 -- # set +x 00:07:51.611 10:13:04 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:51.611 10:13:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:51.611 10:13:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:51.611 10:13:04 -- common/autotest_common.sh@10 -- # set +x 00:07:51.611 ************************************ 00:07:51.611 START TEST version 00:07:51.611 ************************************ 00:07:51.611 10:13:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:51.611 * Looking for test storage... 
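The exchange above is the point of the app_cmdline test: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two calls go through while anything else (env_dpdk_get_mem_stats here) is rejected with JSON-RPC error -32601 before it reaches a handler. The same check by hand, as a sketch against the default socket:

  # Target restricted to an RPC allow-list, as in the logged run.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version        # ok: prints the version object
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods         # ok: lists exactly the two allowed methods
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats  # fails: "Method not found" (-32601)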
00:07:51.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:51.611 10:13:05 -- app/version.sh@17 -- # get_header_version major 00:07:51.611 10:13:05 -- app/version.sh@14 -- # cut -f2 00:07:51.611 10:13:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:51.611 10:13:05 -- app/version.sh@14 -- # tr -d '"' 00:07:51.611 10:13:05 -- app/version.sh@17 -- # major=24 00:07:51.611 10:13:05 -- app/version.sh@18 -- # get_header_version minor 00:07:51.611 10:13:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:51.611 10:13:05 -- app/version.sh@14 -- # cut -f2 00:07:51.611 10:13:05 -- app/version.sh@14 -- # tr -d '"' 00:07:51.611 10:13:05 -- app/version.sh@18 -- # minor=1 00:07:51.611 10:13:05 -- app/version.sh@19 -- # get_header_version patch 00:07:51.611 10:13:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:51.611 10:13:05 -- app/version.sh@14 -- # cut -f2 00:07:51.611 10:13:05 -- app/version.sh@14 -- # tr -d '"' 00:07:51.611 10:13:05 -- app/version.sh@19 -- # patch=1 00:07:51.611 10:13:05 -- app/version.sh@20 -- # get_header_version suffix 00:07:51.611 10:13:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:51.611 10:13:05 -- app/version.sh@14 -- # cut -f2 00:07:51.611 10:13:05 -- app/version.sh@14 -- # tr -d '"' 00:07:51.611 10:13:05 -- app/version.sh@20 -- # suffix=-pre 00:07:51.611 10:13:05 -- app/version.sh@22 -- # version=24.1 00:07:51.611 10:13:05 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:51.611 10:13:05 -- app/version.sh@25 -- # version=24.1.1 00:07:51.611 10:13:05 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:51.611 10:13:05 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:51.611 10:13:05 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:51.869 10:13:05 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:51.869 10:13:05 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:51.869 00:07:51.869 real 0m0.149s 00:07:51.869 user 0m0.085s 00:07:51.869 sys 0m0.098s 00:07:51.869 ************************************ 00:07:51.869 END TEST version 00:07:51.869 ************************************ 00:07:51.869 10:13:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.869 10:13:05 -- common/autotest_common.sh@10 -- # set +x 00:07:51.869 10:13:05 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:51.869 10:13:05 -- spdk/autotest.sh@204 -- # uname -s 00:07:51.869 10:13:05 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:51.869 10:13:05 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:51.869 10:13:05 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:07:51.869 10:13:05 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:07:51.869 10:13:05 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:51.869 10:13:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:51.869 10:13:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:51.869 10:13:05 -- common/autotest_common.sh@10 -- # set +x 00:07:51.869 
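version.sh, traced above, never touches the build: it scrapes the version components out of include/spdk/version.h and then cross-checks the result against the installed Python package (python3 -c 'import spdk; print(spdk.__version__)'). The header-parsing step is just the grep/cut/tr pipeline from get_header_version:

  # Extract the version components from the header, as the trace does.
  hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  echo "$major.$minor.$patch"   # 24.1.1 for this build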
************************************ 00:07:51.869 START TEST spdk_dd 00:07:51.869 ************************************ 00:07:51.869 10:13:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:51.869 * Looking for test storage... 00:07:51.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:51.869 10:13:05 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:51.869 10:13:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.869 10:13:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.869 10:13:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.869 10:13:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.869 10:13:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.870 10:13:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.870 10:13:05 -- paths/export.sh@5 -- # export PATH 00:07:51.870 10:13:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.870 10:13:05 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:52.129 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:52.129 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:52.129 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:52.390 10:13:05 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:52.390 10:13:05 -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:52.390 10:13:05 -- scripts/common.sh@311 -- # local bdf bdfs 00:07:52.390 10:13:05 -- scripts/common.sh@312 -- # local nvmes 00:07:52.390 10:13:05 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:07:52.390 10:13:05 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:52.390 10:13:05 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:07:52.390 10:13:05 -- scripts/common.sh@297 -- # local bdf= 00:07:52.390 10:13:05 -- scripts/common.sh@299 -- # 
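Before any dd runs, dd.sh re-runs scripts/setup.sh and then enumerates NVMe controllers with nvme_in_userspace, whose trace follows. Underneath the helper functions, that discovery is an lspci filter on PCI class 01/08/02 (mass storage, NVM Express):

  # The same device enumeration nvme_in_userspace performs below:
  # list PCI functions whose class code starts with 0108 (NVMe).
  lspci -mm -n -D | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
  # on this VM: the two QEMU NVMe controllers, 0000:00:06.0 and 0000:00:07.0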
iter_all_pci_class_code 01 08 02 00:07:52.390 10:13:05 -- scripts/common.sh@232 -- # local class 00:07:52.390 10:13:05 -- scripts/common.sh@233 -- # local subclass 00:07:52.390 10:13:05 -- scripts/common.sh@234 -- # local progif 00:07:52.390 10:13:05 -- scripts/common.sh@235 -- # printf %02x 1 00:07:52.390 10:13:05 -- scripts/common.sh@235 -- # class=01 00:07:52.390 10:13:05 -- scripts/common.sh@236 -- # printf %02x 8 00:07:52.390 10:13:05 -- scripts/common.sh@236 -- # subclass=08 00:07:52.390 10:13:05 -- scripts/common.sh@237 -- # printf %02x 2 00:07:52.390 10:13:05 -- scripts/common.sh@237 -- # progif=02 00:07:52.390 10:13:05 -- scripts/common.sh@239 -- # hash lspci 00:07:52.390 10:13:05 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:07:52.390 10:13:05 -- scripts/common.sh@242 -- # grep -i -- -p02 00:07:52.390 10:13:05 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:07:52.390 10:13:05 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:52.390 10:13:05 -- scripts/common.sh@244 -- # tr -d '"' 00:07:52.390 10:13:05 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:52.390 10:13:05 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:07:52.390 10:13:05 -- scripts/common.sh@15 -- # local i 00:07:52.390 10:13:05 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:07:52.390 10:13:05 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:52.390 10:13:05 -- scripts/common.sh@24 -- # return 0 00:07:52.390 10:13:05 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:07:52.390 10:13:05 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:52.390 10:13:05 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:07:52.390 10:13:05 -- scripts/common.sh@15 -- # local i 00:07:52.390 10:13:05 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:07:52.390 10:13:05 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:52.390 10:13:05 -- scripts/common.sh@24 -- # return 0 00:07:52.390 10:13:05 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:07:52.390 10:13:05 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:07:52.391 10:13:05 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:07:52.391 10:13:05 -- scripts/common.sh@322 -- # uname -s 00:07:52.391 10:13:05 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:07:52.391 10:13:05 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:07:52.391 10:13:05 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:07:52.391 10:13:05 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:07:52.391 10:13:05 -- scripts/common.sh@322 -- # uname -s 00:07:52.391 10:13:05 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:07:52.391 10:13:05 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:07:52.391 10:13:05 -- scripts/common.sh@327 -- # (( 2 )) 00:07:52.391 10:13:05 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:07:52.391 10:13:05 -- dd/dd.sh@13 -- # check_liburing 00:07:52.391 10:13:05 -- dd/common.sh@139 -- # local lib so 00:07:52.391 10:13:05 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:07:52.391 10:13:05 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 
-- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- 
# [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- 
dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.391 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:07:52.391 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.23 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.23 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ librte_dmadev.so.23 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ librte_eal.so.23 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ librte_ethdev.so.23 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ librte_hash.so.23 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ librte_kvargs.so.23 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ librte_mbuf.so.23 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ librte_mempool.so.23 == liburing.so.* ]] 00:07:52.393 10:13:05 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.23 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ librte_net.so.23 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ librte_pci.so.23 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ librte_power.so.23 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ librte_rcu.so.23 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ librte_ring.so.23 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ librte_telemetry.so.23 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ librte_vhost.so.23 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:07:52.393 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.393 10:13:05 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:07:52.394 10:13:05 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:52.394 10:13:05 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:52.394 10:13:05 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:52.394 * spdk_dd linked to liburing 00:07:52.394 10:13:05 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:52.394 10:13:05 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:52.394 10:13:05 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:52.394 10:13:05 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:52.394 10:13:05 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:52.394 10:13:05 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 
00:07:52.394 10:13:05 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:52.394 10:13:05 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:52.394 10:13:05 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:52.394 10:13:05 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:52.394 10:13:05 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:52.394 10:13:05 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:52.394 10:13:05 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:52.394 10:13:05 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:52.394 10:13:05 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:52.394 10:13:05 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:52.394 10:13:05 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:52.394 10:13:05 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:52.394 10:13:05 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:52.394 10:13:05 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:52.394 10:13:05 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:52.394 10:13:05 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:52.394 10:13:05 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:52.394 10:13:05 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:52.394 10:13:05 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:52.394 10:13:05 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:52.394 10:13:05 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:52.394 10:13:05 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:52.394 10:13:05 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:52.394 10:13:05 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:52.394 10:13:05 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:52.394 10:13:05 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:52.394 10:13:05 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:52.394 10:13:05 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:52.394 10:13:05 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:52.394 10:13:05 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:52.394 10:13:05 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:52.394 10:13:05 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:52.394 10:13:05 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:52.394 10:13:05 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:52.394 10:13:05 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:52.394 10:13:05 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:52.394 10:13:05 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:52.394 10:13:05 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:52.394 10:13:05 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:52.394 10:13:05 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:52.394 10:13:05 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:52.394 10:13:05 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:52.394 10:13:05 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:52.394 10:13:05 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:52.394 10:13:05 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:52.394 10:13:05 -- common/build_config.sh@50 -- # 
CONFIG_XNVME=n 00:07:52.394 10:13:05 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:52.394 10:13:05 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:52.394 10:13:05 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:07:52.394 10:13:05 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:52.394 10:13:05 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:52.394 10:13:05 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:52.394 10:13:05 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:52.394 10:13:05 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:52.394 10:13:05 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:52.394 10:13:05 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:52.394 10:13:05 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:52.394 10:13:05 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:52.394 10:13:05 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:52.394 10:13:05 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:52.394 10:13:05 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:52.394 10:13:05 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:52.394 10:13:05 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:52.394 10:13:05 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:52.394 10:13:05 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:52.394 10:13:05 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:52.394 10:13:05 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:52.394 10:13:05 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:52.394 10:13:05 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:52.394 10:13:05 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:52.394 10:13:05 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:52.394 10:13:05 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:52.394 10:13:05 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:52.394 10:13:05 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:52.394 10:13:05 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:07:52.394 10:13:05 -- dd/common.sh@149 -- # [[ y != y ]] 00:07:52.394 10:13:05 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:07:52.394 10:13:05 -- dd/common.sh@156 -- # export liburing_in_use=1 00:07:52.394 10:13:05 -- dd/common.sh@156 -- # liburing_in_use=1 00:07:52.394 10:13:05 -- dd/common.sh@157 -- # return 0 00:07:52.394 10:13:05 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:52.394 10:13:05 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:07:52.394 10:13:05 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:52.394 10:13:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:52.394 10:13:05 -- common/autotest_common.sh@10 -- # set +x 00:07:52.394 ************************************ 00:07:52.394 START TEST spdk_dd_basic_rw 00:07:52.394 ************************************ 00:07:52.394 10:13:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:07:52.394 * Looking for test storage... 
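check_liburing, traced above, decides whether the uring code paths are exercised: it asks the dynamic loader which shared objects build/bin/spdk_dd pulls in, matches each against liburing.so.*, and after hitting liburing.so.2 prints '* spdk_dd linked to liburing' and sets liburing_in_use=1. A one-line sketch of the same check:

  # List spdk_dd's runtime dependencies via the loader's trace mode and
  # look for liburing; a hit means the uring-enabled paths are in play.
  LD_TRACE_LOADED_OBJECTS=1 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep liburing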
00:07:52.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:52.394 10:13:05 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:52.394 10:13:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.394 10:13:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.394 10:13:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.394 10:13:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.394 10:13:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.394 10:13:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.394 10:13:05 -- paths/export.sh@5 -- # export PATH 00:07:52.394 10:13:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.394 10:13:05 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:52.394 10:13:05 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:52.394 10:13:05 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:52.394 10:13:05 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:07:52.394 10:13:05 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:52.394 10:13:05 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:07:52.394 10:13:05 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:52.394 10:13:05 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:52.394 10:13:05 -- dd/basic_rw.sh@92 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.394 10:13:05 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:07:52.394 10:13:05 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:07:52.395 10:13:05 -- dd/common.sh@126 -- # mapfile -t id 00:07:52.395 10:13:05 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:07:52.656 10:13:05 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric 
Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 98 Data Units Written: 7 Host Read Commands: 2082 Host Write Commands: 93 Controller Busy Time: 0 minutes Power Cycles: 0 Power 
On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:52.656 10:13:05 -- dd/common.sh@130 -- # lbaf=04 00:07:52.656 10:13:05 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported 
UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported 
Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 98 Data Units Written: 7 Host Read Commands: 2082 Host Write Commands: 93 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:52.656 10:13:05 -- dd/common.sh@132 -- # lbaf=4096 00:07:52.656 10:13:05 -- dd/common.sh@134 -- # echo 4096 00:07:52.656 10:13:05 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:52.656 10:13:05 -- dd/basic_rw.sh@96 -- # : 00:07:52.656 10:13:05 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:52.656 10:13:05 -- dd/basic_rw.sh@96 -- # gen_conf 00:07:52.656 10:13:05 -- dd/common.sh@31 -- # xtrace_disable 00:07:52.656 10:13:05 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:52.656 10:13:05 -- common/autotest_common.sh@10 -- # set +x 00:07:52.656 10:13:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:52.656 10:13:05 -- common/autotest_common.sh@10 -- # set +x 
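The get_native_nvme_bs trace above captures the full spdk_nvme_identify output and derives the native block size with two bash regex matches: first the current LBA format index (#04), then that format's data size (4096), which becomes native_bs for the tests that follow. A standalone sketch of the same parse, assuming the identify text has been read into a plain string rather than the mapfile array used by dd/common.sh:

    # Sketch of the native-block-size lookup traced at dd/common.sh@124-@134.
    pci=0000:00:06.0
    id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")

    # "Current LBA Format: LBA Format #04"        -> lbaf=04
    re_current='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re_current ]] && lbaf=${BASH_REMATCH[1]}

    # "LBA Format #04: Data Size: 4096 ..."       -> native_bs=4096
    re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id =~ $re_size ]] && native_bs=${BASH_REMATCH[1]}

    echo "$native_bs"    # 4096 for this QEMU controller

The dd_bs_lt_native_bs test that starts next then asserts that spdk_dd refuses a --bs of 2048, i.e. one smaller than the 4096-byte native block size of the output bdev.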
00:07:52.656 ************************************ 00:07:52.656 START TEST dd_bs_lt_native_bs 00:07:52.656 ************************************ 00:07:52.656 10:13:05 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:52.656 10:13:05 -- common/autotest_common.sh@640 -- # local es=0 00:07:52.657 10:13:05 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:52.657 10:13:05 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.657 10:13:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:52.657 10:13:05 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.657 10:13:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:52.657 10:13:05 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.657 10:13:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:52.657 10:13:05 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.657 10:13:05 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:52.657 10:13:05 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:52.657 { 00:07:52.657 "subsystems": [ 00:07:52.657 { 00:07:52.657 "subsystem": "bdev", 00:07:52.657 "config": [ 00:07:52.657 { 00:07:52.657 "params": { 00:07:52.657 "trtype": "pcie", 00:07:52.657 "traddr": "0000:00:06.0", 00:07:52.657 "name": "Nvme0" 00:07:52.657 }, 00:07:52.657 "method": "bdev_nvme_attach_controller" 00:07:52.657 }, 00:07:52.657 { 00:07:52.657 "method": "bdev_wait_for_examine" 00:07:52.657 } 00:07:52.657 ] 00:07:52.657 } 00:07:52.657 ] 00:07:52.657 } 00:07:52.657 [2024-07-26 10:13:06.044978] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:52.657 [2024-07-26 10:13:06.045083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69292 ] 00:07:52.916 [2024-07-26 10:13:06.184700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.916 [2024-07-26 10:13:06.294134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.175 [2024-07-26 10:13:06.457657] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:53.175 [2024-07-26 10:13:06.457743] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:53.175 [2024-07-26 10:13:06.581122] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:53.434 10:13:06 -- common/autotest_common.sh@643 -- # es=234 00:07:53.434 10:13:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:53.434 10:13:06 -- common/autotest_common.sh@652 -- # es=106 00:07:53.434 10:13:06 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:53.434 10:13:06 -- common/autotest_common.sh@660 -- # es=1 00:07:53.434 10:13:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:53.434 00:07:53.434 real 0m0.687s 00:07:53.434 user 0m0.463s 00:07:53.434 sys 0m0.173s 00:07:53.434 ************************************ 00:07:53.434 END TEST dd_bs_lt_native_bs 00:07:53.434 ************************************ 00:07:53.434 10:13:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.434 10:13:06 -- common/autotest_common.sh@10 -- # set +x 00:07:53.434 10:13:06 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:53.434 10:13:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:53.434 10:13:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:53.434 10:13:06 -- common/autotest_common.sh@10 -- # set +x 00:07:53.434 ************************************ 00:07:53.434 START TEST dd_rw 00:07:53.434 ************************************ 00:07:53.434 10:13:06 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:07:53.434 10:13:06 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:53.434 10:13:06 -- dd/basic_rw.sh@12 -- # local count size 00:07:53.434 10:13:06 -- dd/basic_rw.sh@13 -- # local qds bss 00:07:53.434 10:13:06 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:53.434 10:13:06 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:53.434 10:13:06 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:53.434 10:13:06 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:53.434 10:13:06 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:53.434 10:13:06 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:53.434 10:13:06 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:53.434 10:13:06 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:53.434 10:13:06 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:53.434 10:13:06 -- dd/basic_rw.sh@23 -- # count=15 00:07:53.434 10:13:06 -- dd/basic_rw.sh@24 -- # count=15 00:07:53.434 10:13:06 -- dd/basic_rw.sh@25 -- # size=61440 00:07:53.434 10:13:06 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:53.434 10:13:06 -- dd/common.sh@98 -- # xtrace_disable 00:07:53.434 10:13:06 -- common/autotest_common.sh@10 -- # set +x 00:07:54.002 10:13:07 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
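The dd_rw pass that begins here loops over three block sizes derived from the native size (4096 shifted left by 0, 1 and 2, i.e. 4096, 8192, 16384) and two queue depths (1 and 64); the per-size counts in the trace (15, 7, 3) keep each transfer at 61440, 57344 and 49152 bytes, and every pass round-trips dd.dump0 through Nvme0n1 into dd.dump1 and diffs the two files. A sketch of that geometry, where only the numbers and file roles come from the log and the loop wiring is an assumption:

    # Sketch of the dd_rw geometry set up above (basic_rw 4096).
    native_bs=4096
    qds=(1 64)
    declare -A count_for=([4096]=15 [8192]=7 [16384]=3)

    for e in 0 1 2; do
        bs=$((native_bs << e))                       # 4096, 8192, 16384
        size=$(( ${count_for[$bs]} * bs ))           # 61440, 57344, 49152 bytes per pass
        for qd in "${qds[@]}"; do
            echo "round trip $size bytes through Nvme0n1 at bs=$bs qd=$qd, then diff dump0/dump1"
        done
    done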
00:07:54.002 10:13:07 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:54.002 10:13:07 -- dd/common.sh@31 -- # xtrace_disable 00:07:54.002 10:13:07 -- common/autotest_common.sh@10 -- # set +x 00:07:54.002 [2024-07-26 10:13:07.424028] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:54.002 [2024-07-26 10:13:07.424138] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69323 ] 00:07:54.002 { 00:07:54.002 "subsystems": [ 00:07:54.002 { 00:07:54.002 "subsystem": "bdev", 00:07:54.002 "config": [ 00:07:54.002 { 00:07:54.002 "params": { 00:07:54.002 "trtype": "pcie", 00:07:54.002 "traddr": "0000:00:06.0", 00:07:54.002 "name": "Nvme0" 00:07:54.002 }, 00:07:54.002 "method": "bdev_nvme_attach_controller" 00:07:54.002 }, 00:07:54.002 { 00:07:54.002 "method": "bdev_wait_for_examine" 00:07:54.002 } 00:07:54.002 ] 00:07:54.002 } 00:07:54.002 ] 00:07:54.002 } 00:07:54.261 [2024-07-26 10:13:07.559436] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.261 [2024-07-26 10:13:07.673867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.779  Copying: 60/60 [kB] (average 29 MBps) 00:07:54.779 00:07:54.780 10:13:08 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:54.780 10:13:08 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:54.780 10:13:08 -- dd/common.sh@31 -- # xtrace_disable 00:07:54.780 10:13:08 -- common/autotest_common.sh@10 -- # set +x 00:07:54.780 [2024-07-26 10:13:08.106134] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:54.780 [2024-07-26 10:13:08.106248] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69341 ] 00:07:54.780 { 00:07:54.780 "subsystems": [ 00:07:54.780 { 00:07:54.780 "subsystem": "bdev", 00:07:54.780 "config": [ 00:07:54.780 { 00:07:54.780 "params": { 00:07:54.780 "trtype": "pcie", 00:07:54.780 "traddr": "0000:00:06.0", 00:07:54.780 "name": "Nvme0" 00:07:54.780 }, 00:07:54.780 "method": "bdev_nvme_attach_controller" 00:07:54.780 }, 00:07:54.780 { 00:07:54.780 "method": "bdev_wait_for_examine" 00:07:54.780 } 00:07:54.780 ] 00:07:54.780 } 00:07:54.780 ] 00:07:54.780 } 00:07:55.054 [2024-07-26 10:13:08.237243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.054 [2024-07-26 10:13:08.348343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.576  Copying: 60/60 [kB] (average 29 MBps) 00:07:55.576 00:07:55.576 10:13:08 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:55.576 10:13:08 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:55.576 10:13:08 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:55.576 10:13:08 -- dd/common.sh@11 -- # local nvme_ref= 00:07:55.576 10:13:08 -- dd/common.sh@12 -- # local size=61440 00:07:55.576 10:13:08 -- dd/common.sh@14 -- # local bs=1048576 00:07:55.576 10:13:08 -- dd/common.sh@15 -- # local count=1 00:07:55.576 10:13:08 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:55.576 10:13:08 -- dd/common.sh@18 -- # gen_conf 00:07:55.576 10:13:08 -- dd/common.sh@31 -- # xtrace_disable 00:07:55.576 10:13:08 -- common/autotest_common.sh@10 -- # set +x 00:07:55.576 { 00:07:55.576 "subsystems": [ 00:07:55.576 { 00:07:55.576 "subsystem": "bdev", 00:07:55.577 "config": [ 00:07:55.577 { 00:07:55.577 "params": { 00:07:55.577 "trtype": "pcie", 00:07:55.577 "traddr": "0000:00:06.0", 00:07:55.577 "name": "Nvme0" 00:07:55.577 }, 00:07:55.577 "method": "bdev_nvme_attach_controller" 00:07:55.577 }, 00:07:55.577 { 00:07:55.577 "method": "bdev_wait_for_examine" 00:07:55.577 } 00:07:55.577 ] 00:07:55.577 } 00:07:55.577 ] 00:07:55.577 } 00:07:55.577 [2024-07-26 10:13:08.840457] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:55.577 [2024-07-26 10:13:08.840558] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69355 ] 00:07:55.577 [2024-07-26 10:13:08.977722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.836 [2024-07-26 10:13:09.086118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.095  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:56.095 00:07:56.095 10:13:09 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:56.095 10:13:09 -- dd/basic_rw.sh@23 -- # count=15 00:07:56.095 10:13:09 -- dd/basic_rw.sh@24 -- # count=15 00:07:56.095 10:13:09 -- dd/basic_rw.sh@25 -- # size=61440 00:07:56.095 10:13:09 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:56.095 10:13:09 -- dd/common.sh@98 -- # xtrace_disable 00:07:56.095 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:07:56.663 10:13:10 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:56.663 10:13:10 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:56.663 10:13:10 -- dd/common.sh@31 -- # xtrace_disable 00:07:56.663 10:13:10 -- common/autotest_common.sh@10 -- # set +x 00:07:56.921 [2024-07-26 10:13:10.133754] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:56.921 [2024-07-26 10:13:10.134611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69373 ] 00:07:56.921 { 00:07:56.921 "subsystems": [ 00:07:56.921 { 00:07:56.921 "subsystem": "bdev", 00:07:56.921 "config": [ 00:07:56.921 { 00:07:56.921 "params": { 00:07:56.921 "trtype": "pcie", 00:07:56.921 "traddr": "0000:00:06.0", 00:07:56.921 "name": "Nvme0" 00:07:56.921 }, 00:07:56.921 "method": "bdev_nvme_attach_controller" 00:07:56.921 }, 00:07:56.921 { 00:07:56.921 "method": "bdev_wait_for_examine" 00:07:56.921 } 00:07:56.921 ] 00:07:56.921 } 00:07:56.921 ] 00:07:56.921 } 00:07:56.921 [2024-07-26 10:13:10.272255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.921 [2024-07-26 10:13:10.376720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.439  Copying: 60/60 [kB] (average 58 MBps) 00:07:57.439 00:07:57.439 10:13:10 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:57.439 10:13:10 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:57.439 10:13:10 -- dd/common.sh@31 -- # xtrace_disable 00:07:57.439 10:13:10 -- common/autotest_common.sh@10 -- # set +x 00:07:57.439 [2024-07-26 10:13:10.821506] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:57.439 [2024-07-26 10:13:10.821669] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69391 ] 00:07:57.439 { 00:07:57.439 "subsystems": [ 00:07:57.439 { 00:07:57.439 "subsystem": "bdev", 00:07:57.439 "config": [ 00:07:57.439 { 00:07:57.439 "params": { 00:07:57.439 "trtype": "pcie", 00:07:57.439 "traddr": "0000:00:06.0", 00:07:57.439 "name": "Nvme0" 00:07:57.439 }, 00:07:57.439 "method": "bdev_nvme_attach_controller" 00:07:57.439 }, 00:07:57.439 { 00:07:57.439 "method": "bdev_wait_for_examine" 00:07:57.439 } 00:07:57.439 ] 00:07:57.439 } 00:07:57.439 ] 00:07:57.439 } 00:07:57.699 [2024-07-26 10:13:10.958102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.699 [2024-07-26 10:13:11.067125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.218  Copying: 60/60 [kB] (average 58 MBps) 00:07:58.218 00:07:58.218 10:13:11 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.218 10:13:11 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:58.218 10:13:11 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:58.218 10:13:11 -- dd/common.sh@11 -- # local nvme_ref= 00:07:58.218 10:13:11 -- dd/common.sh@12 -- # local size=61440 00:07:58.218 10:13:11 -- dd/common.sh@14 -- # local bs=1048576 00:07:58.218 10:13:11 -- dd/common.sh@15 -- # local count=1 00:07:58.218 10:13:11 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:58.218 10:13:11 -- dd/common.sh@18 -- # gen_conf 00:07:58.218 10:13:11 -- dd/common.sh@31 -- # xtrace_disable 00:07:58.218 10:13:11 -- common/autotest_common.sh@10 -- # set +x 00:07:58.218 [2024-07-26 10:13:11.549204] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:58.218 [2024-07-26 10:13:11.549865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69404 ] 00:07:58.218 { 00:07:58.218 "subsystems": [ 00:07:58.218 { 00:07:58.218 "subsystem": "bdev", 00:07:58.218 "config": [ 00:07:58.218 { 00:07:58.218 "params": { 00:07:58.218 "trtype": "pcie", 00:07:58.218 "traddr": "0000:00:06.0", 00:07:58.218 "name": "Nvme0" 00:07:58.218 }, 00:07:58.218 "method": "bdev_nvme_attach_controller" 00:07:58.218 }, 00:07:58.218 { 00:07:58.218 "method": "bdev_wait_for_examine" 00:07:58.218 } 00:07:58.218 ] 00:07:58.218 } 00:07:58.218 ] 00:07:58.218 } 00:07:58.477 [2024-07-26 10:13:11.689792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.477 [2024-07-26 10:13:11.801215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.994  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:58.994 00:07:58.994 10:13:12 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:58.994 10:13:12 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:58.994 10:13:12 -- dd/basic_rw.sh@23 -- # count=7 00:07:58.994 10:13:12 -- dd/basic_rw.sh@24 -- # count=7 00:07:58.994 10:13:12 -- dd/basic_rw.sh@25 -- # size=57344 00:07:58.994 10:13:12 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:58.994 10:13:12 -- dd/common.sh@98 -- # xtrace_disable 00:07:58.994 10:13:12 -- common/autotest_common.sh@10 -- # set +x 00:07:59.562 10:13:12 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:59.562 10:13:12 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:59.562 10:13:12 -- dd/common.sh@31 -- # xtrace_disable 00:07:59.562 10:13:12 -- common/autotest_common.sh@10 -- # set +x 00:07:59.562 [2024-07-26 10:13:12.814320] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:59.562 [2024-07-26 10:13:12.814425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69428 ] 00:07:59.562 { 00:07:59.562 "subsystems": [ 00:07:59.562 { 00:07:59.562 "subsystem": "bdev", 00:07:59.562 "config": [ 00:07:59.562 { 00:07:59.562 "params": { 00:07:59.562 "trtype": "pcie", 00:07:59.562 "traddr": "0000:00:06.0", 00:07:59.562 "name": "Nvme0" 00:07:59.562 }, 00:07:59.562 "method": "bdev_nvme_attach_controller" 00:07:59.562 }, 00:07:59.562 { 00:07:59.562 "method": "bdev_wait_for_examine" 00:07:59.562 } 00:07:59.562 ] 00:07:59.562 } 00:07:59.562 ] 00:07:59.562 } 00:07:59.562 [2024-07-26 10:13:12.948129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.821 [2024-07-26 10:13:13.057516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.080  Copying: 56/56 [kB] (average 27 MBps) 00:08:00.080 00:08:00.080 10:13:13 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:08:00.080 10:13:13 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:00.080 10:13:13 -- dd/common.sh@31 -- # xtrace_disable 00:08:00.080 10:13:13 -- common/autotest_common.sh@10 -- # set +x 00:08:00.080 { 00:08:00.080 "subsystems": [ 00:08:00.080 { 00:08:00.080 "subsystem": "bdev", 00:08:00.080 "config": [ 00:08:00.080 { 00:08:00.080 "params": { 00:08:00.080 "trtype": "pcie", 00:08:00.080 "traddr": "0000:00:06.0", 00:08:00.080 "name": "Nvme0" 00:08:00.080 }, 00:08:00.080 "method": "bdev_nvme_attach_controller" 00:08:00.080 }, 00:08:00.080 { 00:08:00.080 "method": "bdev_wait_for_examine" 00:08:00.080 } 00:08:00.080 ] 00:08:00.080 } 00:08:00.080 ] 00:08:00.080 } 00:08:00.080 [2024-07-26 10:13:13.521755] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:00.080 [2024-07-26 10:13:13.521884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69440 ] 00:08:00.339 [2024-07-26 10:13:13.659417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.339 [2024-07-26 10:13:13.764872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.856  Copying: 56/56 [kB] (average 54 MBps) 00:08:00.856 00:08:00.856 10:13:14 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.856 10:13:14 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:00.856 10:13:14 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:00.856 10:13:14 -- dd/common.sh@11 -- # local nvme_ref= 00:08:00.856 10:13:14 -- dd/common.sh@12 -- # local size=57344 00:08:00.856 10:13:14 -- dd/common.sh@14 -- # local bs=1048576 00:08:00.856 10:13:14 -- dd/common.sh@15 -- # local count=1 00:08:00.856 10:13:14 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:00.856 10:13:14 -- dd/common.sh@18 -- # gen_conf 00:08:00.856 10:13:14 -- dd/common.sh@31 -- # xtrace_disable 00:08:00.856 10:13:14 -- common/autotest_common.sh@10 -- # set +x 00:08:00.856 [2024-07-26 10:13:14.238275] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:00.856 [2024-07-26 10:13:14.238424] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69454 ] 00:08:00.856 { 00:08:00.856 "subsystems": [ 00:08:00.856 { 00:08:00.856 "subsystem": "bdev", 00:08:00.856 "config": [ 00:08:00.856 { 00:08:00.856 "params": { 00:08:00.856 "trtype": "pcie", 00:08:00.856 "traddr": "0000:00:06.0", 00:08:00.856 "name": "Nvme0" 00:08:00.856 }, 00:08:00.856 "method": "bdev_nvme_attach_controller" 00:08:00.856 }, 00:08:00.856 { 00:08:00.856 "method": "bdev_wait_for_examine" 00:08:00.856 } 00:08:00.856 ] 00:08:00.856 } 00:08:00.856 ] 00:08:00.856 } 00:08:01.115 [2024-07-26 10:13:14.375809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.115 [2024-07-26 10:13:14.491844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.632  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:01.632 00:08:01.632 10:13:14 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:01.632 10:13:14 -- dd/basic_rw.sh@23 -- # count=7 00:08:01.632 10:13:14 -- dd/basic_rw.sh@24 -- # count=7 00:08:01.632 10:13:14 -- dd/basic_rw.sh@25 -- # size=57344 00:08:01.632 10:13:14 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:01.632 10:13:14 -- dd/common.sh@98 -- # xtrace_disable 00:08:01.632 10:13:14 -- common/autotest_common.sh@10 -- # set +x 00:08:02.199 10:13:15 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:08:02.200 10:13:15 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:02.200 10:13:15 -- dd/common.sh@31 -- # xtrace_disable 00:08:02.200 10:13:15 -- common/autotest_common.sh@10 -- # set +x 00:08:02.200 [2024-07-26 10:13:15.506291] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:02.200 [2024-07-26 10:13:15.506396] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69477 ] 00:08:02.200 { 00:08:02.200 "subsystems": [ 00:08:02.200 { 00:08:02.200 "subsystem": "bdev", 00:08:02.200 "config": [ 00:08:02.200 { 00:08:02.200 "params": { 00:08:02.200 "trtype": "pcie", 00:08:02.200 "traddr": "0000:00:06.0", 00:08:02.200 "name": "Nvme0" 00:08:02.200 }, 00:08:02.200 "method": "bdev_nvme_attach_controller" 00:08:02.200 }, 00:08:02.200 { 00:08:02.200 "method": "bdev_wait_for_examine" 00:08:02.200 } 00:08:02.200 ] 00:08:02.200 } 00:08:02.200 ] 00:08:02.200 } 00:08:02.200 [2024-07-26 10:13:15.639073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.458 [2024-07-26 10:13:15.756197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.026  Copying: 56/56 [kB] (average 54 MBps) 00:08:03.026 00:08:03.026 10:13:16 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:08:03.026 10:13:16 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:03.026 10:13:16 -- dd/common.sh@31 -- # xtrace_disable 00:08:03.026 10:13:16 -- common/autotest_common.sh@10 -- # set +x 00:08:03.026 [2024-07-26 10:13:16.233779] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:03.026 [2024-07-26 10:13:16.233895] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69490 ] 00:08:03.026 { 00:08:03.026 "subsystems": [ 00:08:03.026 { 00:08:03.026 "subsystem": "bdev", 00:08:03.026 "config": [ 00:08:03.026 { 00:08:03.026 "params": { 00:08:03.026 "trtype": "pcie", 00:08:03.026 "traddr": "0000:00:06.0", 00:08:03.026 "name": "Nvme0" 00:08:03.026 }, 00:08:03.026 "method": "bdev_nvme_attach_controller" 00:08:03.026 }, 00:08:03.026 { 00:08:03.026 "method": "bdev_wait_for_examine" 00:08:03.026 } 00:08:03.026 ] 00:08:03.026 } 00:08:03.026 ] 00:08:03.026 } 00:08:03.026 [2024-07-26 10:13:16.366678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.026 [2024-07-26 10:13:16.481129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.543  Copying: 56/56 [kB] (average 54 MBps) 00:08:03.543 00:08:03.543 10:13:16 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:03.543 10:13:16 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:03.543 10:13:16 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:03.543 10:13:16 -- dd/common.sh@11 -- # local nvme_ref= 00:08:03.543 10:13:16 -- dd/common.sh@12 -- # local size=57344 00:08:03.543 10:13:16 -- dd/common.sh@14 -- # local bs=1048576 00:08:03.543 10:13:16 -- dd/common.sh@15 -- # local count=1 00:08:03.543 10:13:16 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:03.543 10:13:16 -- dd/common.sh@18 -- # gen_conf 00:08:03.543 10:13:16 -- dd/common.sh@31 -- # xtrace_disable 00:08:03.543 10:13:16 -- common/autotest_common.sh@10 -- # set +x 00:08:03.543 [2024-07-26 10:13:16.954382] Starting SPDK v24.01.1-pre git sha1 
dbef7efac / DPDK 22.11.4 initialization... 00:08:03.543 [2024-07-26 10:13:16.954499] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69509 ] 00:08:03.543 { 00:08:03.543 "subsystems": [ 00:08:03.543 { 00:08:03.543 "subsystem": "bdev", 00:08:03.543 "config": [ 00:08:03.543 { 00:08:03.543 "params": { 00:08:03.543 "trtype": "pcie", 00:08:03.543 "traddr": "0000:00:06.0", 00:08:03.543 "name": "Nvme0" 00:08:03.543 }, 00:08:03.543 "method": "bdev_nvme_attach_controller" 00:08:03.543 }, 00:08:03.543 { 00:08:03.543 "method": "bdev_wait_for_examine" 00:08:03.543 } 00:08:03.543 ] 00:08:03.543 } 00:08:03.543 ] 00:08:03.543 } 00:08:03.802 [2024-07-26 10:13:17.087182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.802 [2024-07-26 10:13:17.197016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.318  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:04.318 00:08:04.318 10:13:17 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:04.318 10:13:17 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:04.318 10:13:17 -- dd/basic_rw.sh@23 -- # count=3 00:08:04.318 10:13:17 -- dd/basic_rw.sh@24 -- # count=3 00:08:04.318 10:13:17 -- dd/basic_rw.sh@25 -- # size=49152 00:08:04.318 10:13:17 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:04.318 10:13:17 -- dd/common.sh@98 -- # xtrace_disable 00:08:04.318 10:13:17 -- common/autotest_common.sh@10 -- # set +x 00:08:04.884 10:13:18 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:08:04.884 10:13:18 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:04.884 10:13:18 -- dd/common.sh@31 -- # xtrace_disable 00:08:04.884 10:13:18 -- common/autotest_common.sh@10 -- # set +x 00:08:04.884 [2024-07-26 10:13:18.157353] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:04.884 [2024-07-26 10:13:18.157470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69527 ] 00:08:04.884 { 00:08:04.884 "subsystems": [ 00:08:04.884 { 00:08:04.884 "subsystem": "bdev", 00:08:04.884 "config": [ 00:08:04.884 { 00:08:04.884 "params": { 00:08:04.884 "trtype": "pcie", 00:08:04.884 "traddr": "0000:00:06.0", 00:08:04.884 "name": "Nvme0" 00:08:04.884 }, 00:08:04.884 "method": "bdev_nvme_attach_controller" 00:08:04.884 }, 00:08:04.884 { 00:08:04.884 "method": "bdev_wait_for_examine" 00:08:04.884 } 00:08:04.884 ] 00:08:04.884 } 00:08:04.884 ] 00:08:04.884 } 00:08:04.884 [2024-07-26 10:13:18.294233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.143 [2024-07-26 10:13:18.405551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.401  Copying: 48/48 [kB] (average 46 MBps) 00:08:05.401 00:08:05.401 10:13:18 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:05.401 10:13:18 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:05.401 10:13:18 -- dd/common.sh@31 -- # xtrace_disable 00:08:05.401 10:13:18 -- common/autotest_common.sh@10 -- # set +x 00:08:05.721 [2024-07-26 10:13:18.856837] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:05.722 [2024-07-26 10:13:18.856949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69545 ] 00:08:05.722 { 00:08:05.722 "subsystems": [ 00:08:05.722 { 00:08:05.722 "subsystem": "bdev", 00:08:05.722 "config": [ 00:08:05.722 { 00:08:05.722 "params": { 00:08:05.722 "trtype": "pcie", 00:08:05.722 "traddr": "0000:00:06.0", 00:08:05.722 "name": "Nvme0" 00:08:05.722 }, 00:08:05.722 "method": "bdev_nvme_attach_controller" 00:08:05.722 }, 00:08:05.722 { 00:08:05.722 "method": "bdev_wait_for_examine" 00:08:05.722 } 00:08:05.722 ] 00:08:05.722 } 00:08:05.722 ] 00:08:05.722 } 00:08:05.722 [2024-07-26 10:13:18.996895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.722 [2024-07-26 10:13:19.121313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.238  Copying: 48/48 [kB] (average 46 MBps) 00:08:06.238 00:08:06.238 10:13:19 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.238 10:13:19 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:06.238 10:13:19 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:06.238 10:13:19 -- dd/common.sh@11 -- # local nvme_ref= 00:08:06.238 10:13:19 -- dd/common.sh@12 -- # local size=49152 00:08:06.238 10:13:19 -- dd/common.sh@14 -- # local bs=1048576 00:08:06.238 10:13:19 -- dd/common.sh@15 -- # local count=1 00:08:06.238 10:13:19 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:06.238 10:13:19 -- dd/common.sh@18 -- # gen_conf 00:08:06.238 10:13:19 -- dd/common.sh@31 -- # xtrace_disable 00:08:06.238 10:13:19 -- common/autotest_common.sh@10 -- # set +x 00:08:06.238 [2024-07-26 10:13:19.602070] Starting SPDK v24.01.1-pre git sha1 
dbef7efac / DPDK 22.11.4 initialization... 00:08:06.238 [2024-07-26 10:13:19.602205] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69553 ] 00:08:06.238 { 00:08:06.238 "subsystems": [ 00:08:06.238 { 00:08:06.238 "subsystem": "bdev", 00:08:06.238 "config": [ 00:08:06.238 { 00:08:06.238 "params": { 00:08:06.238 "trtype": "pcie", 00:08:06.238 "traddr": "0000:00:06.0", 00:08:06.238 "name": "Nvme0" 00:08:06.238 }, 00:08:06.238 "method": "bdev_nvme_attach_controller" 00:08:06.238 }, 00:08:06.238 { 00:08:06.238 "method": "bdev_wait_for_examine" 00:08:06.238 } 00:08:06.238 ] 00:08:06.238 } 00:08:06.238 ] 00:08:06.238 } 00:08:06.496 [2024-07-26 10:13:19.742450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.496 [2024-07-26 10:13:19.851446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.012  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:07.012 00:08:07.012 10:13:20 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:07.012 10:13:20 -- dd/basic_rw.sh@23 -- # count=3 00:08:07.012 10:13:20 -- dd/basic_rw.sh@24 -- # count=3 00:08:07.012 10:13:20 -- dd/basic_rw.sh@25 -- # size=49152 00:08:07.012 10:13:20 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:07.012 10:13:20 -- dd/common.sh@98 -- # xtrace_disable 00:08:07.012 10:13:20 -- common/autotest_common.sh@10 -- # set +x 00:08:07.577 10:13:20 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:07.577 10:13:20 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:07.577 10:13:20 -- dd/common.sh@31 -- # xtrace_disable 00:08:07.577 10:13:20 -- common/autotest_common.sh@10 -- # set +x 00:08:07.577 [2024-07-26 10:13:20.847534] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:07.577 [2024-07-26 10:13:20.847665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69582 ] 00:08:07.577 { 00:08:07.577 "subsystems": [ 00:08:07.577 { 00:08:07.577 "subsystem": "bdev", 00:08:07.577 "config": [ 00:08:07.577 { 00:08:07.577 "params": { 00:08:07.577 "trtype": "pcie", 00:08:07.577 "traddr": "0000:00:06.0", 00:08:07.577 "name": "Nvme0" 00:08:07.577 }, 00:08:07.577 "method": "bdev_nvme_attach_controller" 00:08:07.577 }, 00:08:07.578 { 00:08:07.578 "method": "bdev_wait_for_examine" 00:08:07.578 } 00:08:07.578 ] 00:08:07.578 } 00:08:07.578 ] 00:08:07.578 } 00:08:07.578 [2024-07-26 10:13:20.980796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.835 [2024-07-26 10:13:21.094809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.094  Copying: 48/48 [kB] (average 46 MBps) 00:08:08.094 00:08:08.094 10:13:21 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:08.094 10:13:21 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:08.094 10:13:21 -- dd/common.sh@31 -- # xtrace_disable 00:08:08.094 10:13:21 -- common/autotest_common.sh@10 -- # set +x 00:08:08.094 [2024-07-26 10:13:21.543895] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:08.094 [2024-07-26 10:13:21.544000] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69589 ] 00:08:08.352 { 00:08:08.352 "subsystems": [ 00:08:08.352 { 00:08:08.352 "subsystem": "bdev", 00:08:08.352 "config": [ 00:08:08.352 { 00:08:08.352 "params": { 00:08:08.352 "trtype": "pcie", 00:08:08.352 "traddr": "0000:00:06.0", 00:08:08.352 "name": "Nvme0" 00:08:08.352 }, 00:08:08.352 "method": "bdev_nvme_attach_controller" 00:08:08.352 }, 00:08:08.352 { 00:08:08.352 "method": "bdev_wait_for_examine" 00:08:08.352 } 00:08:08.352 ] 00:08:08.352 } 00:08:08.352 ] 00:08:08.352 } 00:08:08.352 [2024-07-26 10:13:21.680604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.352 [2024-07-26 10:13:21.785789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.868  Copying: 48/48 [kB] (average 46 MBps) 00:08:08.868 00:08:08.868 10:13:22 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:08.868 10:13:22 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:08.868 10:13:22 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:08.868 10:13:22 -- dd/common.sh@11 -- # local nvme_ref= 00:08:08.868 10:13:22 -- dd/common.sh@12 -- # local size=49152 00:08:08.868 10:13:22 -- dd/common.sh@14 -- # local bs=1048576 00:08:08.868 10:13:22 -- dd/common.sh@15 -- # local count=1 00:08:08.868 10:13:22 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:08.868 10:13:22 -- dd/common.sh@18 -- # gen_conf 00:08:08.868 10:13:22 -- dd/common.sh@31 -- # xtrace_disable 00:08:08.868 10:13:22 -- common/autotest_common.sh@10 -- # set +x 00:08:08.868 { 00:08:08.868 "subsystems": [ 00:08:08.868 { 00:08:08.868 
"subsystem": "bdev", 00:08:08.868 "config": [ 00:08:08.868 { 00:08:08.868 "params": { 00:08:08.868 "trtype": "pcie", 00:08:08.868 "traddr": "0000:00:06.0", 00:08:08.868 "name": "Nvme0" 00:08:08.868 }, 00:08:08.868 "method": "bdev_nvme_attach_controller" 00:08:08.868 }, 00:08:08.868 { 00:08:08.868 "method": "bdev_wait_for_examine" 00:08:08.868 } 00:08:08.868 ] 00:08:08.868 } 00:08:08.868 ] 00:08:08.868 } 00:08:08.868 [2024-07-26 10:13:22.267360] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:08.868 [2024-07-26 10:13:22.267465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69608 ] 00:08:09.126 [2024-07-26 10:13:22.405496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.126 [2024-07-26 10:13:22.512782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.642  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:09.642 00:08:09.642 00:08:09.642 real 0m16.178s 00:08:09.642 user 0m11.827s 00:08:09.642 sys 0m3.244s 00:08:09.642 10:13:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.642 10:13:22 -- common/autotest_common.sh@10 -- # set +x 00:08:09.642 ************************************ 00:08:09.642 END TEST dd_rw 00:08:09.642 ************************************ 00:08:09.642 10:13:22 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:09.642 10:13:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:09.642 10:13:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.642 10:13:22 -- common/autotest_common.sh@10 -- # set +x 00:08:09.642 ************************************ 00:08:09.642 START TEST dd_rw_offset 00:08:09.642 ************************************ 00:08:09.642 10:13:22 -- common/autotest_common.sh@1104 -- # basic_offset 00:08:09.642 10:13:22 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:09.642 10:13:22 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:09.642 10:13:22 -- dd/common.sh@98 -- # xtrace_disable 00:08:09.642 10:13:22 -- common/autotest_common.sh@10 -- # set +x 00:08:09.642 10:13:23 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:09.643 10:13:23 -- dd/basic_rw.sh@56 -- # 
data=tqfuwvbb5etynbk5y9ljo5d4f2cgavog9vs3cxa7msdn1ns79mgtewwxl0acgovu28p8mwn743uck3bases3oxkpv7y6ywb7aa91op4uuevlg9zrsooque42k7qe1yjbdx9t9icddg3wzv68tjdogllodyve33h7j65w8d4cf6z8i9icfh6lrudvqgy18qidhdpveu82ynb31vjo6vvebe07byr21qudl4c1p9rnyzps5cltrdbz6i1yr49dpv548i6cjviapvcabm7qzjkvh62daa7svl57l0lfdnsf5i8r7b63gjsrr65ai30p84zn2clk1npzu1zmdbg4mqmsgc4zc6e196ilho0rarp2y0v48g0y9ruq4v6kved3ru4raopskilpwkdqakhc2mt1c288ces5n0pntxhzhaafjag7hf6zglfbo9au63ylh3enbfqgd08bjxov2avrwgj6bmtgtfajuftgh6dpagwjs8uuqiswy0zob21jtjfzf1i2m97y1omzre0d0dt275u5getq8kmra1mtu9kgcu81axd3lj62dyw4rjlu4vxbghxwwzqmia7hdpowpitej0dv2neb2tbw5ytc383vwapjunus7yqbn62lxsunfcizbbwspt7o6ab6rc0ujkzefyk05976z6e84a6nvqz6qmzrc1f8f3pgx3bra6l3cgoivgxmm7pb6ridpy6di2ib3vfut88hylg197ydlinzib8f8neldkchv2r94t4g5hcgly06a1027e1yahwz2lby1kd3c8txsam17y6j25eze7gc9uxgex5kgp3m4bnr6oqqv58s8tn3mwyzqa94atjudf3yms69sb7ljjo4w99slh3f6vyulj9nkqncm1a7kb7egbd8ug0v08ckxobi03qmg3ocabblj2bc3dxpk79bz03tu6u67d65uua164tqhffra6kercprwz0omrbi8jet9ts49jhs0gxdibn6gjpr7im3gwu1glc8tzgntz1djcq63pbopcbczx8hwksffuteziu6njly53e2rk15rto3hoibprln1815hh9n5x0cqkp6kllf67l1a8pnfjkuekomqoiwryhltfsthkjdy0t27nz6jk9plqrwy4d9xvjvkoy5a53di7njyvsu8shocrw8wf67azc2l9yhn6ea7hejdic8902c12uy2x5pun3wknaboi7hpzsly7yroirrk67j0tk3vrn6dx5gbf0vqb1egr21vbupqncq90tgfhmz2f1dn1ws5u9o30cnlt7ydm0wesmmh78xq76e1odt417fprmfiz0s92lvuo7suwpfqcmizd577sey5upahdqfkl5ciwbw2w7t4eoxoypg88khs55rb9ksjcb7nesfev6xije0ddecnx6boxf14vgpzxl0z91klfy2kcrb4dut7l5osn7gxa50hq0cpjfngjptikaldj100unlxukmvvkvn1ndszvfka9esx7id5zwdfa87xx63by4gy6h1mkjzjdbt6p2ish00tta8k2m8s15oykn1h1lqgli8mfov0x2ycl64qxqztwersibvhja2mtptupv372gvrt6pbe9f94ws8upoceqhxty72mmvpml907pljzrya8ys4xs3luxo9zl7jyvso1zzr78qqbl2b48rfpf69d514g43slqvx8w4hg5zbg4p7yxp8i7m908nsj960xgxtucrswrmugrfqaeqpvhgwr094x9skjuj2k3tedagjgwgvq57jm28hz831rdo9nn8x83q5q76jarv8tv48qva8g943gjw3v0azfzqalud9jnu7x1e5x24i990re36a87vyes8ust8f8q2er0indkc8sn2ssl9oaflfdodu5sc2u71q5y7s10a25v1o0rkehcvxqajfxx9wcqu17r1uwxq0drgc2u6b0d0lmfkm8ogl9f01t4q9r1xtio1bn5hkmep308b26q6t5ogsd64sp0qmcrz5t83q7t9zsk6vhpda3njhhbptn2aa4aea0h0s3sobrgpr6wngrws7rc8m1ayp7jnjr1ozuwni4zzbocthsmcvrgb9ym3uby1jevnrwxdv8devucwp0u9hvs4db4gyicx6d71cg3lfpwf3jn4x4ha0n4jmcnxso5e9vmw3gqgokhmcqji84kl3pte9xqb7k32pzkppxcxwojktx4ndozkejromvmxk4mjfxsklzlavvwxax3xhboalk4zr1jvt4t6xu2h7x9wmybw145johl566yet3smiotqvfanwbvc52l0d3gt26itvexds94gzpc21sstz1od3o710dqbtrb94p9i6lq7cr979ndxxcvz101kytet84c6bngc6dy6mpi1rur3fzy1iw6d9byushkzh9f1zwu6idfq751k2nvcb4juf5n1juasz0mggwxnh1uknk9759ig3e8t6kj3066ac3w5zxj26xfs5rsmjxsjup66gq735incd8px2iqekgi7uz57blwu34821s4x5qgkqx23oz6z1as28kez05q21iejedq72jxzj2nec0qqgdc3dnzk1x6aosy8ikj4cibb62i93l4hi6r22lird50oigapsfjgzh9bbw5gxtd1w6n9cwlzg1lhzqa00qh1i6424wvzju77ikwxmkrzdpw91gn8xyc9q1pu1j3dmh0jixxc6uqanrenff44vva4zavbg3pm3hbqwupz7oykmj6mbzcjbaejzy4fu669kfrm0gcy9lx6qrtd71x70ebc4o2pmyz6150ik2o3nr0m8zb402g84z8m07jj750ceo28p55k0sumgp7rmrn9hxrfwo569guoxyx34ul01q6fq1g0igln7u7yhwno321w1446zb2c3vigbaqgeno2hks3kecl5iinha5s33cr9fetgpc5gczleg1ma9vggsgg4w1aylaqsa5fc2wupk3ufnbrc35xdy7u4ofgm2lpx3zmo4ljc8kqvke05ro3oi0cjierdlosiqmyy8ig5kaulpf5637904mfy5stko983civq59wix8shls1sdcu3bpxyyxppqvt9e7asvxrguojhyfvdmw3014vwuvvn0ilxg1irfv0b2wv5o9zyod4fq25welti6grsa7ujfycsgv1kpe57kvqo24aj5tfftvn6bdzob91c2cal2jqtg1xsjnwm0786tm93a7imswbbv8umt6rhiru5ty26gm8ow0foooxlub8u6p6nkc7udpygkgk5nmrfgwqag23tspuy2e84vo78h8pd112mkxk6fni1wv9lczsnnabmvwxpifpbbxukt24pm5r9yqr4vl2viiovtzuo0dyh37nknsc3n29gf1ahthohdq68zr42e8toaj4p10kuflt3qnz4rv0yo3mwsngkk3xzbhsj9kcgklwozsy6myr34u5zkeo7se1pypkrot2vv420hilq5p48xm2ob2ew6s230snp2b21ksxrql57ixi9ls1tsbs6liuiwxee14uqwtqrfwnv7xfu
v1tx99mty0j1xipe2jc93ftkvs6ewqkysuo0ldfj626q7g1m8q4maprch1ja7cvqshb99w2azhlwwn7v3k36kxntw3smrxs4xnop7isht99v8mdlvdbyifx14o3j11sd95qgyil33dhvf3zqpm4un64epfl3hjfrb1ldxt2r1ag1gs1u8aujs94jlpv14xuw9m5c6a3ttgkpidd2c4wk6e7c05zpmhxwu43vverm9g3dt53sgwrdk12rg2484jn083vafpxqxauzi1bcgb0o7skleloqf25lvriu6l9mddmdzzdtnhht78e4nb794zncnxwb142isx0h5s65uhakbtmr2szyv83j229sk1scd8qyn3hjghxn0xogflw8epyd0twrkhgb6bbpdcdym9liv5vifbbyc4kehe9p032km3m0tazawrvx6qvi99cpvbjdmmzm4nmh97il5i28kcbubzl8ha648dlo8tbpxb0y86vfmvi9s7t3w8r2gr8oc85p4ht9vjin8hxq6agvwai3msk1rqx9vtuzik 00:08:09.643 10:13:23 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:09.643 10:13:23 -- dd/basic_rw.sh@59 -- # gen_conf 00:08:09.643 10:13:23 -- dd/common.sh@31 -- # xtrace_disable 00:08:09.643 10:13:23 -- common/autotest_common.sh@10 -- # set +x 00:08:09.643 { 00:08:09.643 "subsystems": [ 00:08:09.643 { 00:08:09.643 "subsystem": "bdev", 00:08:09.643 "config": [ 00:08:09.643 { 00:08:09.643 "params": { 00:08:09.643 "trtype": "pcie", 00:08:09.643 "traddr": "0000:00:06.0", 00:08:09.643 "name": "Nvme0" 00:08:09.643 }, 00:08:09.643 "method": "bdev_nvme_attach_controller" 00:08:09.643 }, 00:08:09.643 { 00:08:09.643 "method": "bdev_wait_for_examine" 00:08:09.643 } 00:08:09.643 ] 00:08:09.643 } 00:08:09.643 ] 00:08:09.643 } 00:08:09.643 [2024-07-26 10:13:23.069185] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:09.643 [2024-07-26 10:13:23.069332] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69643 ] 00:08:09.901 [2024-07-26 10:13:23.214131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.901 [2024-07-26 10:13:23.319760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.423  Copying: 4096/4096 [B] (average 4000 kBps) 00:08:10.423 00:08:10.423 10:13:23 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:10.423 10:13:23 -- dd/basic_rw.sh@65 -- # gen_conf 00:08:10.423 10:13:23 -- dd/common.sh@31 -- # xtrace_disable 00:08:10.423 10:13:23 -- common/autotest_common.sh@10 -- # set +x 00:08:10.423 [2024-07-26 10:13:23.777491] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:10.423 [2024-07-26 10:13:23.777657] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69650 ] 00:08:10.423 { 00:08:10.423 "subsystems": [ 00:08:10.423 { 00:08:10.423 "subsystem": "bdev", 00:08:10.423 "config": [ 00:08:10.423 { 00:08:10.423 "params": { 00:08:10.423 "trtype": "pcie", 00:08:10.423 "traddr": "0000:00:06.0", 00:08:10.423 "name": "Nvme0" 00:08:10.423 }, 00:08:10.423 "method": "bdev_nvme_attach_controller" 00:08:10.423 }, 00:08:10.423 { 00:08:10.423 "method": "bdev_wait_for_examine" 00:08:10.423 } 00:08:10.423 ] 00:08:10.423 } 00:08:10.423 ] 00:08:10.423 } 00:08:10.681 [2024-07-26 10:13:23.915355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.681 [2024-07-26 10:13:24.020069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.198  Copying: 4096/4096 [B] (average 4000 kBps) 00:08:11.198 00:08:11.198 10:13:24 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:11.199 10:13:24 -- dd/basic_rw.sh@72 -- # [[ tqfuwvbb5etynbk5y9ljo5d4f2cgavog9vs3cxa7msdn1ns79mgtewwxl0acgovu28p8mwn743uck3bases3oxkpv7y6ywb7aa91op4uuevlg9zrsooque42k7qe1yjbdx9t9icddg3wzv68tjdogllodyve33h7j65w8d4cf6z8i9icfh6lrudvqgy18qidhdpveu82ynb31vjo6vvebe07byr21qudl4c1p9rnyzps5cltrdbz6i1yr49dpv548i6cjviapvcabm7qzjkvh62daa7svl57l0lfdnsf5i8r7b63gjsrr65ai30p84zn2clk1npzu1zmdbg4mqmsgc4zc6e196ilho0rarp2y0v48g0y9ruq4v6kved3ru4raopskilpwkdqakhc2mt1c288ces5n0pntxhzhaafjag7hf6zglfbo9au63ylh3enbfqgd08bjxov2avrwgj6bmtgtfajuftgh6dpagwjs8uuqiswy0zob21jtjfzf1i2m97y1omzre0d0dt275u5getq8kmra1mtu9kgcu81axd3lj62dyw4rjlu4vxbghxwwzqmia7hdpowpitej0dv2neb2tbw5ytc383vwapjunus7yqbn62lxsunfcizbbwspt7o6ab6rc0ujkzefyk05976z6e84a6nvqz6qmzrc1f8f3pgx3bra6l3cgoivgxmm7pb6ridpy6di2ib3vfut88hylg197ydlinzib8f8neldkchv2r94t4g5hcgly06a1027e1yahwz2lby1kd3c8txsam17y6j25eze7gc9uxgex5kgp3m4bnr6oqqv58s8tn3mwyzqa94atjudf3yms69sb7ljjo4w99slh3f6vyulj9nkqncm1a7kb7egbd8ug0v08ckxobi03qmg3ocabblj2bc3dxpk79bz03tu6u67d65uua164tqhffra6kercprwz0omrbi8jet9ts49jhs0gxdibn6gjpr7im3gwu1glc8tzgntz1djcq63pbopcbczx8hwksffuteziu6njly53e2rk15rto3hoibprln1815hh9n5x0cqkp6kllf67l1a8pnfjkuekomqoiwryhltfsthkjdy0t27nz6jk9plqrwy4d9xvjvkoy5a53di7njyvsu8shocrw8wf67azc2l9yhn6ea7hejdic8902c12uy2x5pun3wknaboi7hpzsly7yroirrk67j0tk3vrn6dx5gbf0vqb1egr21vbupqncq90tgfhmz2f1dn1ws5u9o30cnlt7ydm0wesmmh78xq76e1odt417fprmfiz0s92lvuo7suwpfqcmizd577sey5upahdqfkl5ciwbw2w7t4eoxoypg88khs55rb9ksjcb7nesfev6xije0ddecnx6boxf14vgpzxl0z91klfy2kcrb4dut7l5osn7gxa50hq0cpjfngjptikaldj100unlxukmvvkvn1ndszvfka9esx7id5zwdfa87xx63by4gy6h1mkjzjdbt6p2ish00tta8k2m8s15oykn1h1lqgli8mfov0x2ycl64qxqztwersibvhja2mtptupv372gvrt6pbe9f94ws8upoceqhxty72mmvpml907pljzrya8ys4xs3luxo9zl7jyvso1zzr78qqbl2b48rfpf69d514g43slqvx8w4hg5zbg4p7yxp8i7m908nsj960xgxtucrswrmugrfqaeqpvhgwr094x9skjuj2k3tedagjgwgvq57jm28hz831rdo9nn8x83q5q76jarv8tv48qva8g943gjw3v0azfzqalud9jnu7x1e5x24i990re36a87vyes8ust8f8q2er0indkc8sn2ssl9oaflfdodu5sc2u71q5y7s10a25v1o0rkehcvxqajfxx9wcqu17r1uwxq0drgc2u6b0d0lmfkm8ogl9f01t4q9r1xtio1bn5hkmep308b26q6t5ogsd64sp0qmcrz5t83q7t9zsk6vhpda3njhhbptn2aa4aea0h0s3sobrgpr6wngrws7rc8m1ayp7jnjr1ozuwni4zzbocthsmcvrgb9ym3uby1jevnrwxdv8devucwp0u9hvs4db4gyicx6d71cg3lfpwf3jn4x4ha0n4jmcnxso5e9vmw3gqgokhmcqji84kl3pte9xqb7k32pzkppxcxwojktx4ndozkejromvmxk4mjfxsklzlavvwxax3xhboalk4zr1jvt4t6xu2h7x9wmybw145johl566yet3smiotqvfanwbvc52l0d3gt26itvexds94gzpc21sstz1od3o710dqbtrb94p9i6lq7cr979ndxxcvz101kytet84c6bngc6dy6m
pi1rur3fzy1iw6d9byushkzh9f1zwu6idfq751k2nvcb4juf5n1juasz0mggwxnh1uknk9759ig3e8t6kj3066ac3w5zxj26xfs5rsmjxsjup66gq735incd8px2iqekgi7uz57blwu34821s4x5qgkqx23oz6z1as28kez05q21iejedq72jxzj2nec0qqgdc3dnzk1x6aosy8ikj4cibb62i93l4hi6r22lird50oigapsfjgzh9bbw5gxtd1w6n9cwlzg1lhzqa00qh1i6424wvzju77ikwxmkrzdpw91gn8xyc9q1pu1j3dmh0jixxc6uqanrenff44vva4zavbg3pm3hbqwupz7oykmj6mbzcjbaejzy4fu669kfrm0gcy9lx6qrtd71x70ebc4o2pmyz6150ik2o3nr0m8zb402g84z8m07jj750ceo28p55k0sumgp7rmrn9hxrfwo569guoxyx34ul01q6fq1g0igln7u7yhwno321w1446zb2c3vigbaqgeno2hks3kecl5iinha5s33cr9fetgpc5gczleg1ma9vggsgg4w1aylaqsa5fc2wupk3ufnbrc35xdy7u4ofgm2lpx3zmo4ljc8kqvke05ro3oi0cjierdlosiqmyy8ig5kaulpf5637904mfy5stko983civq59wix8shls1sdcu3bpxyyxppqvt9e7asvxrguojhyfvdmw3014vwuvvn0ilxg1irfv0b2wv5o9zyod4fq25welti6grsa7ujfycsgv1kpe57kvqo24aj5tfftvn6bdzob91c2cal2jqtg1xsjnwm0786tm93a7imswbbv8umt6rhiru5ty26gm8ow0foooxlub8u6p6nkc7udpygkgk5nmrfgwqag23tspuy2e84vo78h8pd112mkxk6fni1wv9lczsnnabmvwxpifpbbxukt24pm5r9yqr4vl2viiovtzuo0dyh37nknsc3n29gf1ahthohdq68zr42e8toaj4p10kuflt3qnz4rv0yo3mwsngkk3xzbhsj9kcgklwozsy6myr34u5zkeo7se1pypkrot2vv420hilq5p48xm2ob2ew6s230snp2b21ksxrql57ixi9ls1tsbs6liuiwxee14uqwtqrfwnv7xfuv1tx99mty0j1xipe2jc93ftkvs6ewqkysuo0ldfj626q7g1m8q4maprch1ja7cvqshb99w2azhlwwn7v3k36kxntw3smrxs4xnop7isht99v8mdlvdbyifx14o3j11sd95qgyil33dhvf3zqpm4un64epfl3hjfrb1ldxt2r1ag1gs1u8aujs94jlpv14xuw9m5c6a3ttgkpidd2c4wk6e7c05zpmhxwu43vverm9g3dt53sgwrdk12rg2484jn083vafpxqxauzi1bcgb0o7skleloqf25lvriu6l9mddmdzzdtnhht78e4nb794zncnxwb142isx0h5s65uhakbtmr2szyv83j229sk1scd8qyn3hjghxn0xogflw8epyd0twrkhgb6bbpdcdym9liv5vifbbyc4kehe9p032km3m0tazawrvx6qvi99cpvbjdmmzm4nmh97il5i28kcbubzl8ha648dlo8tbpxb0y86vfmvi9s7t3w8r2gr8oc85p4ht9vjin8hxq6agvwai3msk1rqx9vtuzik == \t\q\f\u\w\v\b\b\5\e\t\y\n\b\k\5\y\9\l\j\o\5\d\4\f\2\c\g\a\v\o\g\9\v\s\3\c\x\a\7\m\s\d\n\1\n\s\7\9\m\g\t\e\w\w\x\l\0\a\c\g\o\v\u\2\8\p\8\m\w\n\7\4\3\u\c\k\3\b\a\s\e\s\3\o\x\k\p\v\7\y\6\y\w\b\7\a\a\9\1\o\p\4\u\u\e\v\l\g\9\z\r\s\o\o\q\u\e\4\2\k\7\q\e\1\y\j\b\d\x\9\t\9\i\c\d\d\g\3\w\z\v\6\8\t\j\d\o\g\l\l\o\d\y\v\e\3\3\h\7\j\6\5\w\8\d\4\c\f\6\z\8\i\9\i\c\f\h\6\l\r\u\d\v\q\g\y\1\8\q\i\d\h\d\p\v\e\u\8\2\y\n\b\3\1\v\j\o\6\v\v\e\b\e\0\7\b\y\r\2\1\q\u\d\l\4\c\1\p\9\r\n\y\z\p\s\5\c\l\t\r\d\b\z\6\i\1\y\r\4\9\d\p\v\5\4\8\i\6\c\j\v\i\a\p\v\c\a\b\m\7\q\z\j\k\v\h\6\2\d\a\a\7\s\v\l\5\7\l\0\l\f\d\n\s\f\5\i\8\r\7\b\6\3\g\j\s\r\r\6\5\a\i\3\0\p\8\4\z\n\2\c\l\k\1\n\p\z\u\1\z\m\d\b\g\4\m\q\m\s\g\c\4\z\c\6\e\1\9\6\i\l\h\o\0\r\a\r\p\2\y\0\v\4\8\g\0\y\9\r\u\q\4\v\6\k\v\e\d\3\r\u\4\r\a\o\p\s\k\i\l\p\w\k\d\q\a\k\h\c\2\m\t\1\c\2\8\8\c\e\s\5\n\0\p\n\t\x\h\z\h\a\a\f\j\a\g\7\h\f\6\z\g\l\f\b\o\9\a\u\6\3\y\l\h\3\e\n\b\f\q\g\d\0\8\b\j\x\o\v\2\a\v\r\w\g\j\6\b\m\t\g\t\f\a\j\u\f\t\g\h\6\d\p\a\g\w\j\s\8\u\u\q\i\s\w\y\0\z\o\b\2\1\j\t\j\f\z\f\1\i\2\m\9\7\y\1\o\m\z\r\e\0\d\0\d\t\2\7\5\u\5\g\e\t\q\8\k\m\r\a\1\m\t\u\9\k\g\c\u\8\1\a\x\d\3\l\j\6\2\d\y\w\4\r\j\l\u\4\v\x\b\g\h\x\w\w\z\q\m\i\a\7\h\d\p\o\w\p\i\t\e\j\0\d\v\2\n\e\b\2\t\b\w\5\y\t\c\3\8\3\v\w\a\p\j\u\n\u\s\7\y\q\b\n\6\2\l\x\s\u\n\f\c\i\z\b\b\w\s\p\t\7\o\6\a\b\6\r\c\0\u\j\k\z\e\f\y\k\0\5\9\7\6\z\6\e\8\4\a\6\n\v\q\z\6\q\m\z\r\c\1\f\8\f\3\p\g\x\3\b\r\a\6\l\3\c\g\o\i\v\g\x\m\m\7\p\b\6\r\i\d\p\y\6\d\i\2\i\b\3\v\f\u\t\8\8\h\y\l\g\1\9\7\y\d\l\i\n\z\i\b\8\f\8\n\e\l\d\k\c\h\v\2\r\9\4\t\4\g\5\h\c\g\l\y\0\6\a\1\0\2\7\e\1\y\a\h\w\z\2\l\b\y\1\k\d\3\c\8\t\x\s\a\m\1\7\y\6\j\2\5\e\z\e\7\g\c\9\u\x\g\e\x\5\k\g\p\3\m\4\b\n\r\6\o\q\q\v\5\8\s\8\t\n\3\m\w\y\z\q\a\9\4\a\t\j\u\d\f\3\y\m\s\6\9\s\b\7\l\j\j\o\4\w\9\9\s\l\h\3\f\6\v\y\u\l\j\9\n\k\q\n\c\m\1\a\7\k\b\7\e\g\b\d\8\u\g\0\v\0\8\c\k\x\o\b\i\0\3\q\m\g\3\o\c\a\b\b\l\
j\2\b\c\3\d\x\p\k\7\9\b\z\0\3\t\u\6\u\6\7\d\6\5\u\u\a\1\6\4\t\q\h\f\f\r\a\6\k\e\r\c\p\r\w\z\0\o\m\r\b\i\8\j\e\t\9\t\s\4\9\j\h\s\0\g\x\d\i\b\n\6\g\j\p\r\7\i\m\3\g\w\u\1\g\l\c\8\t\z\g\n\t\z\1\d\j\c\q\6\3\p\b\o\p\c\b\c\z\x\8\h\w\k\s\f\f\u\t\e\z\i\u\6\n\j\l\y\5\3\e\2\r\k\1\5\r\t\o\3\h\o\i\b\p\r\l\n\1\8\1\5\h\h\9\n\5\x\0\c\q\k\p\6\k\l\l\f\6\7\l\1\a\8\p\n\f\j\k\u\e\k\o\m\q\o\i\w\r\y\h\l\t\f\s\t\h\k\j\d\y\0\t\2\7\n\z\6\j\k\9\p\l\q\r\w\y\4\d\9\x\v\j\v\k\o\y\5\a\5\3\d\i\7\n\j\y\v\s\u\8\s\h\o\c\r\w\8\w\f\6\7\a\z\c\2\l\9\y\h\n\6\e\a\7\h\e\j\d\i\c\8\9\0\2\c\1\2\u\y\2\x\5\p\u\n\3\w\k\n\a\b\o\i\7\h\p\z\s\l\y\7\y\r\o\i\r\r\k\6\7\j\0\t\k\3\v\r\n\6\d\x\5\g\b\f\0\v\q\b\1\e\g\r\2\1\v\b\u\p\q\n\c\q\9\0\t\g\f\h\m\z\2\f\1\d\n\1\w\s\5\u\9\o\3\0\c\n\l\t\7\y\d\m\0\w\e\s\m\m\h\7\8\x\q\7\6\e\1\o\d\t\4\1\7\f\p\r\m\f\i\z\0\s\9\2\l\v\u\o\7\s\u\w\p\f\q\c\m\i\z\d\5\7\7\s\e\y\5\u\p\a\h\d\q\f\k\l\5\c\i\w\b\w\2\w\7\t\4\e\o\x\o\y\p\g\8\8\k\h\s\5\5\r\b\9\k\s\j\c\b\7\n\e\s\f\e\v\6\x\i\j\e\0\d\d\e\c\n\x\6\b\o\x\f\1\4\v\g\p\z\x\l\0\z\9\1\k\l\f\y\2\k\c\r\b\4\d\u\t\7\l\5\o\s\n\7\g\x\a\5\0\h\q\0\c\p\j\f\n\g\j\p\t\i\k\a\l\d\j\1\0\0\u\n\l\x\u\k\m\v\v\k\v\n\1\n\d\s\z\v\f\k\a\9\e\s\x\7\i\d\5\z\w\d\f\a\8\7\x\x\6\3\b\y\4\g\y\6\h\1\m\k\j\z\j\d\b\t\6\p\2\i\s\h\0\0\t\t\a\8\k\2\m\8\s\1\5\o\y\k\n\1\h\1\l\q\g\l\i\8\m\f\o\v\0\x\2\y\c\l\6\4\q\x\q\z\t\w\e\r\s\i\b\v\h\j\a\2\m\t\p\t\u\p\v\3\7\2\g\v\r\t\6\p\b\e\9\f\9\4\w\s\8\u\p\o\c\e\q\h\x\t\y\7\2\m\m\v\p\m\l\9\0\7\p\l\j\z\r\y\a\8\y\s\4\x\s\3\l\u\x\o\9\z\l\7\j\y\v\s\o\1\z\z\r\7\8\q\q\b\l\2\b\4\8\r\f\p\f\6\9\d\5\1\4\g\4\3\s\l\q\v\x\8\w\4\h\g\5\z\b\g\4\p\7\y\x\p\8\i\7\m\9\0\8\n\s\j\9\6\0\x\g\x\t\u\c\r\s\w\r\m\u\g\r\f\q\a\e\q\p\v\h\g\w\r\0\9\4\x\9\s\k\j\u\j\2\k\3\t\e\d\a\g\j\g\w\g\v\q\5\7\j\m\2\8\h\z\8\3\1\r\d\o\9\n\n\8\x\8\3\q\5\q\7\6\j\a\r\v\8\t\v\4\8\q\v\a\8\g\9\4\3\g\j\w\3\v\0\a\z\f\z\q\a\l\u\d\9\j\n\u\7\x\1\e\5\x\2\4\i\9\9\0\r\e\3\6\a\8\7\v\y\e\s\8\u\s\t\8\f\8\q\2\e\r\0\i\n\d\k\c\8\s\n\2\s\s\l\9\o\a\f\l\f\d\o\d\u\5\s\c\2\u\7\1\q\5\y\7\s\1\0\a\2\5\v\1\o\0\r\k\e\h\c\v\x\q\a\j\f\x\x\9\w\c\q\u\1\7\r\1\u\w\x\q\0\d\r\g\c\2\u\6\b\0\d\0\l\m\f\k\m\8\o\g\l\9\f\0\1\t\4\q\9\r\1\x\t\i\o\1\b\n\5\h\k\m\e\p\3\0\8\b\2\6\q\6\t\5\o\g\s\d\6\4\s\p\0\q\m\c\r\z\5\t\8\3\q\7\t\9\z\s\k\6\v\h\p\d\a\3\n\j\h\h\b\p\t\n\2\a\a\4\a\e\a\0\h\0\s\3\s\o\b\r\g\p\r\6\w\n\g\r\w\s\7\r\c\8\m\1\a\y\p\7\j\n\j\r\1\o\z\u\w\n\i\4\z\z\b\o\c\t\h\s\m\c\v\r\g\b\9\y\m\3\u\b\y\1\j\e\v\n\r\w\x\d\v\8\d\e\v\u\c\w\p\0\u\9\h\v\s\4\d\b\4\g\y\i\c\x\6\d\7\1\c\g\3\l\f\p\w\f\3\j\n\4\x\4\h\a\0\n\4\j\m\c\n\x\s\o\5\e\9\v\m\w\3\g\q\g\o\k\h\m\c\q\j\i\8\4\k\l\3\p\t\e\9\x\q\b\7\k\3\2\p\z\k\p\p\x\c\x\w\o\j\k\t\x\4\n\d\o\z\k\e\j\r\o\m\v\m\x\k\4\m\j\f\x\s\k\l\z\l\a\v\v\w\x\a\x\3\x\h\b\o\a\l\k\4\z\r\1\j\v\t\4\t\6\x\u\2\h\7\x\9\w\m\y\b\w\1\4\5\j\o\h\l\5\6\6\y\e\t\3\s\m\i\o\t\q\v\f\a\n\w\b\v\c\5\2\l\0\d\3\g\t\2\6\i\t\v\e\x\d\s\9\4\g\z\p\c\2\1\s\s\t\z\1\o\d\3\o\7\1\0\d\q\b\t\r\b\9\4\p\9\i\6\l\q\7\c\r\9\7\9\n\d\x\x\c\v\z\1\0\1\k\y\t\e\t\8\4\c\6\b\n\g\c\6\d\y\6\m\p\i\1\r\u\r\3\f\z\y\1\i\w\6\d\9\b\y\u\s\h\k\z\h\9\f\1\z\w\u\6\i\d\f\q\7\5\1\k\2\n\v\c\b\4\j\u\f\5\n\1\j\u\a\s\z\0\m\g\g\w\x\n\h\1\u\k\n\k\9\7\5\9\i\g\3\e\8\t\6\k\j\3\0\6\6\a\c\3\w\5\z\x\j\2\6\x\f\s\5\r\s\m\j\x\s\j\u\p\6\6\g\q\7\3\5\i\n\c\d\8\p\x\2\i\q\e\k\g\i\7\u\z\5\7\b\l\w\u\3\4\8\2\1\s\4\x\5\q\g\k\q\x\2\3\o\z\6\z\1\a\s\2\8\k\e\z\0\5\q\2\1\i\e\j\e\d\q\7\2\j\x\z\j\2\n\e\c\0\q\q\g\d\c\3\d\n\z\k\1\x\6\a\o\s\y\8\i\k\j\4\c\i\b\b\6\2\i\9\3\l\4\h\i\6\r\2\2\l\i\r\d\5\0\o\i\g\a\p\s\f\j\g\z\h\9\b\b\w\5\g\x\t\d\1\w\6\n\9\c\w\l\z\g\1\l\h\z\q\a\0\0\q\h\1\i\6\4\2\4\w\v\z\j\u\7\7\i\k\w\x\m\k\r\z\d\p\w\9\1\g\n\8\x\y\c\9\q\1\p\u\1
\j\3\d\m\h\0\j\i\x\x\c\6\u\q\a\n\r\e\n\f\f\4\4\v\v\a\4\z\a\v\b\g\3\p\m\3\h\b\q\w\u\p\z\7\o\y\k\m\j\6\m\b\z\c\j\b\a\e\j\z\y\4\f\u\6\6\9\k\f\r\m\0\g\c\y\9\l\x\6\q\r\t\d\7\1\x\7\0\e\b\c\4\o\2\p\m\y\z\6\1\5\0\i\k\2\o\3\n\r\0\m\8\z\b\4\0\2\g\8\4\z\8\m\0\7\j\j\7\5\0\c\e\o\2\8\p\5\5\k\0\s\u\m\g\p\7\r\m\r\n\9\h\x\r\f\w\o\5\6\9\g\u\o\x\y\x\3\4\u\l\0\1\q\6\f\q\1\g\0\i\g\l\n\7\u\7\y\h\w\n\o\3\2\1\w\1\4\4\6\z\b\2\c\3\v\i\g\b\a\q\g\e\n\o\2\h\k\s\3\k\e\c\l\5\i\i\n\h\a\5\s\3\3\c\r\9\f\e\t\g\p\c\5\g\c\z\l\e\g\1\m\a\9\v\g\g\s\g\g\4\w\1\a\y\l\a\q\s\a\5\f\c\2\w\u\p\k\3\u\f\n\b\r\c\3\5\x\d\y\7\u\4\o\f\g\m\2\l\p\x\3\z\m\o\4\l\j\c\8\k\q\v\k\e\0\5\r\o\3\o\i\0\c\j\i\e\r\d\l\o\s\i\q\m\y\y\8\i\g\5\k\a\u\l\p\f\5\6\3\7\9\0\4\m\f\y\5\s\t\k\o\9\8\3\c\i\v\q\5\9\w\i\x\8\s\h\l\s\1\s\d\c\u\3\b\p\x\y\y\x\p\p\q\v\t\9\e\7\a\s\v\x\r\g\u\o\j\h\y\f\v\d\m\w\3\0\1\4\v\w\u\v\v\n\0\i\l\x\g\1\i\r\f\v\0\b\2\w\v\5\o\9\z\y\o\d\4\f\q\2\5\w\e\l\t\i\6\g\r\s\a\7\u\j\f\y\c\s\g\v\1\k\p\e\5\7\k\v\q\o\2\4\a\j\5\t\f\f\t\v\n\6\b\d\z\o\b\9\1\c\2\c\a\l\2\j\q\t\g\1\x\s\j\n\w\m\0\7\8\6\t\m\9\3\a\7\i\m\s\w\b\b\v\8\u\m\t\6\r\h\i\r\u\5\t\y\2\6\g\m\8\o\w\0\f\o\o\o\x\l\u\b\8\u\6\p\6\n\k\c\7\u\d\p\y\g\k\g\k\5\n\m\r\f\g\w\q\a\g\2\3\t\s\p\u\y\2\e\8\4\v\o\7\8\h\8\p\d\1\1\2\m\k\x\k\6\f\n\i\1\w\v\9\l\c\z\s\n\n\a\b\m\v\w\x\p\i\f\p\b\b\x\u\k\t\2\4\p\m\5\r\9\y\q\r\4\v\l\2\v\i\i\o\v\t\z\u\o\0\d\y\h\3\7\n\k\n\s\c\3\n\2\9\g\f\1\a\h\t\h\o\h\d\q\6\8\z\r\4\2\e\8\t\o\a\j\4\p\1\0\k\u\f\l\t\3\q\n\z\4\r\v\0\y\o\3\m\w\s\n\g\k\k\3\x\z\b\h\s\j\9\k\c\g\k\l\w\o\z\s\y\6\m\y\r\3\4\u\5\z\k\e\o\7\s\e\1\p\y\p\k\r\o\t\2\v\v\4\2\0\h\i\l\q\5\p\4\8\x\m\2\o\b\2\e\w\6\s\2\3\0\s\n\p\2\b\2\1\k\s\x\r\q\l\5\7\i\x\i\9\l\s\1\t\s\b\s\6\l\i\u\i\w\x\e\e\1\4\u\q\w\t\q\r\f\w\n\v\7\x\f\u\v\1\t\x\9\9\m\t\y\0\j\1\x\i\p\e\2\j\c\9\3\f\t\k\v\s\6\e\w\q\k\y\s\u\o\0\l\d\f\j\6\2\6\q\7\g\1\m\8\q\4\m\a\p\r\c\h\1\j\a\7\c\v\q\s\h\b\9\9\w\2\a\z\h\l\w\w\n\7\v\3\k\3\6\k\x\n\t\w\3\s\m\r\x\s\4\x\n\o\p\7\i\s\h\t\9\9\v\8\m\d\l\v\d\b\y\i\f\x\1\4\o\3\j\1\1\s\d\9\5\q\g\y\i\l\3\3\d\h\v\f\3\z\q\p\m\4\u\n\6\4\e\p\f\l\3\h\j\f\r\b\1\l\d\x\t\2\r\1\a\g\1\g\s\1\u\8\a\u\j\s\9\4\j\l\p\v\1\4\x\u\w\9\m\5\c\6\a\3\t\t\g\k\p\i\d\d\2\c\4\w\k\6\e\7\c\0\5\z\p\m\h\x\w\u\4\3\v\v\e\r\m\9\g\3\d\t\5\3\s\g\w\r\d\k\1\2\r\g\2\4\8\4\j\n\0\8\3\v\a\f\p\x\q\x\a\u\z\i\1\b\c\g\b\0\o\7\s\k\l\e\l\o\q\f\2\5\l\v\r\i\u\6\l\9\m\d\d\m\d\z\z\d\t\n\h\h\t\7\8\e\4\n\b\7\9\4\z\n\c\n\x\w\b\1\4\2\i\s\x\0\h\5\s\6\5\u\h\a\k\b\t\m\r\2\s\z\y\v\8\3\j\2\2\9\s\k\1\s\c\d\8\q\y\n\3\h\j\g\h\x\n\0\x\o\g\f\l\w\8\e\p\y\d\0\t\w\r\k\h\g\b\6\b\b\p\d\c\d\y\m\9\l\i\v\5\v\i\f\b\b\y\c\4\k\e\h\e\9\p\0\3\2\k\m\3\m\0\t\a\z\a\w\r\v\x\6\q\v\i\9\9\c\p\v\b\j\d\m\m\z\m\4\n\m\h\9\7\i\l\5\i\2\8\k\c\b\u\b\z\l\8\h\a\6\4\8\d\l\o\8\t\b\p\x\b\0\y\8\6\v\f\m\v\i\9\s\7\t\3\w\8\r\2\g\r\8\o\c\8\5\p\4\h\t\9\v\j\i\n\8\h\x\q\6\a\g\v\w\a\i\3\m\s\k\1\r\q\x\9\v\t\u\z\i\k ]] 00:08:11.199 00:08:11.199 real 0m1.453s 00:08:11.199 user 0m1.013s 00:08:11.199 sys 0m0.335s 00:08:11.199 10:13:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.199 ************************************ 00:08:11.199 END TEST dd_rw_offset 00:08:11.199 ************************************ 00:08:11.199 10:13:24 -- common/autotest_common.sh@10 -- # set +x 00:08:11.199 10:13:24 -- dd/basic_rw.sh@1 -- # cleanup 00:08:11.199 10:13:24 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:11.199 10:13:24 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:11.199 10:13:24 -- dd/common.sh@11 -- # local nvme_ref= 00:08:11.199 10:13:24 -- dd/common.sh@12 -- # local size=0xffff 00:08:11.199 10:13:24 -- dd/common.sh@14 -- # local bs=1048576 
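For orientation, the dd_rw_offset round trip that just completed reduces to roughly the following sketch (not the verbatim dd/basic_rw.sh: the DD variable and the <(gen_conf) process substitution are shorthand for the full binary path and the /dev/fd/62 JSON descriptor used in the trace; gen_bytes and gen_conf are the dd/common.sh helpers shown above):
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
data=$(gen_bytes 4096)                             # 4096-byte random payload, assuming gen_bytes prints to stdout
printf '%s' "$data" > /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
# write one block at offset 1 on the Nvme0n1 bdev, then read the same block back
"$DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json <(gen_conf)
"$DD" --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json <(gen_conf)
read -rn4096 data_check < /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
[[ $data == "$data_check" ]]                       # payload must survive the offset round trip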
00:08:11.199 10:13:24 -- dd/common.sh@15 -- # local count=1 00:08:11.199 10:13:24 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:11.199 10:13:24 -- dd/common.sh@18 -- # gen_conf 00:08:11.199 10:13:24 -- dd/common.sh@31 -- # xtrace_disable 00:08:11.199 10:13:24 -- common/autotest_common.sh@10 -- # set +x 00:08:11.199 [2024-07-26 10:13:24.503565] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:11.199 [2024-07-26 10:13:24.503683] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69683 ] 00:08:11.199 { 00:08:11.199 "subsystems": [ 00:08:11.199 { 00:08:11.199 "subsystem": "bdev", 00:08:11.199 "config": [ 00:08:11.199 { 00:08:11.199 "params": { 00:08:11.199 "trtype": "pcie", 00:08:11.199 "traddr": "0000:00:06.0", 00:08:11.199 "name": "Nvme0" 00:08:11.199 }, 00:08:11.199 "method": "bdev_nvme_attach_controller" 00:08:11.199 }, 00:08:11.199 { 00:08:11.199 "method": "bdev_wait_for_examine" 00:08:11.199 } 00:08:11.199 ] 00:08:11.199 } 00:08:11.199 ] 00:08:11.199 } 00:08:11.199 [2024-07-26 10:13:24.638877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.457 [2024-07-26 10:13:24.745552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.715  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:11.715 00:08:11.715 10:13:25 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:11.715 ************************************ 00:08:11.715 END TEST spdk_dd_basic_rw 00:08:11.715 ************************************ 00:08:11.715 00:08:11.715 real 0m19.429s 00:08:11.715 user 0m13.909s 00:08:11.715 sys 0m4.095s 00:08:11.715 10:13:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.715 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:08:11.974 10:13:25 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:11.974 10:13:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:11.974 10:13:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.974 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:08:11.974 ************************************ 00:08:11.974 START TEST spdk_dd_posix 00:08:11.974 ************************************ 00:08:11.974 10:13:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:11.974 * Looking for test storage... 
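The --json /dev/fd/62 argument on every spdk_dd invocation above carries the bdev configuration printed by gen_conf; with the log timestamps stripped, the document used throughout this run is:
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
It attaches the PCIe NVMe controller at 0000:00:06.0 as bdev Nvme0 and waits for examine to finish before the copy starts.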
00:08:11.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:11.974 10:13:25 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.974 10:13:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.974 10:13:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.974 10:13:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.974 10:13:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.974 10:13:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.974 10:13:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.974 10:13:25 -- paths/export.sh@5 -- # export PATH 00:08:11.974 10:13:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.974 10:13:25 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:11.974 10:13:25 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:11.975 10:13:25 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:11.975 10:13:25 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:11.975 10:13:25 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:11.975 10:13:25 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:11.975 10:13:25 -- dd/posix.sh@130 -- # tests 00:08:11.975 10:13:25 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:11.975 * First test run, liburing in use 00:08:11.975 10:13:25 -- dd/posix.sh@102 -- # run_test 
dd_flag_append append 00:08:11.975 10:13:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:11.975 10:13:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.975 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:08:11.975 ************************************ 00:08:11.975 START TEST dd_flag_append 00:08:11.975 ************************************ 00:08:11.975 10:13:25 -- common/autotest_common.sh@1104 -- # append 00:08:11.975 10:13:25 -- dd/posix.sh@16 -- # local dump0 00:08:11.975 10:13:25 -- dd/posix.sh@17 -- # local dump1 00:08:11.975 10:13:25 -- dd/posix.sh@19 -- # gen_bytes 32 00:08:11.975 10:13:25 -- dd/common.sh@98 -- # xtrace_disable 00:08:11.975 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:08:11.975 10:13:25 -- dd/posix.sh@19 -- # dump0=dv42oehkur3f0r350eove6xay0307gxg 00:08:11.975 10:13:25 -- dd/posix.sh@20 -- # gen_bytes 32 00:08:11.975 10:13:25 -- dd/common.sh@98 -- # xtrace_disable 00:08:11.975 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:08:11.975 10:13:25 -- dd/posix.sh@20 -- # dump1=l0ygtk4cupddjdce5dwj1pyu9thxd24z 00:08:11.975 10:13:25 -- dd/posix.sh@22 -- # printf %s dv42oehkur3f0r350eove6xay0307gxg 00:08:11.975 10:13:25 -- dd/posix.sh@23 -- # printf %s l0ygtk4cupddjdce5dwj1pyu9thxd24z 00:08:11.975 10:13:25 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:11.975 [2024-07-26 10:13:25.320829] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:11.975 [2024-07-26 10:13:25.320930] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69745 ] 00:08:12.233 [2024-07-26 10:13:25.453146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.233 [2024-07-26 10:13:25.558346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.491  Copying: 32/32 [B] (average 31 kBps) 00:08:12.491 00:08:12.491 10:13:25 -- dd/posix.sh@27 -- # [[ l0ygtk4cupddjdce5dwj1pyu9thxd24zdv42oehkur3f0r350eove6xay0307gxg == \l\0\y\g\t\k\4\c\u\p\d\d\j\d\c\e\5\d\w\j\1\p\y\u\9\t\h\x\d\2\4\z\d\v\4\2\o\e\h\k\u\r\3\f\0\r\3\5\0\e\o\v\e\6\x\a\y\0\3\0\7\g\x\g ]] 00:08:12.491 00:08:12.491 ************************************ 00:08:12.491 END TEST dd_flag_append 00:08:12.491 ************************************ 00:08:12.491 real 0m0.648s 00:08:12.491 user 0m0.370s 00:08:12.491 sys 0m0.149s 00:08:12.491 10:13:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.491 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:08:12.749 10:13:25 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:12.749 10:13:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:12.749 10:13:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:12.749 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:08:12.749 ************************************ 00:08:12.749 START TEST dd_flag_directory 00:08:12.749 ************************************ 00:08:12.750 10:13:25 -- common/autotest_common.sh@1104 -- # directory 00:08:12.750 10:13:25 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:12.750 10:13:25 -- 
common/autotest_common.sh@640 -- # local es=0 00:08:12.750 10:13:25 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:12.750 10:13:25 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.750 10:13:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:12.750 10:13:25 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.750 10:13:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:12.750 10:13:25 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.750 10:13:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:12.750 10:13:25 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.750 10:13:25 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:12.750 10:13:25 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:12.750 [2024-07-26 10:13:26.023536] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:12.750 [2024-07-26 10:13:26.023645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69772 ] 00:08:12.750 [2024-07-26 10:13:26.159330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.007 [2024-07-26 10:13:26.268175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.007 [2024-07-26 10:13:26.364268] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:13.007 [2024-07-26 10:13:26.364343] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:13.007 [2024-07-26 10:13:26.364360] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:13.266 [2024-07-26 10:13:26.484487] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:13.266 10:13:26 -- common/autotest_common.sh@643 -- # es=236 00:08:13.266 10:13:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:13.266 10:13:26 -- common/autotest_common.sh@652 -- # es=108 00:08:13.266 10:13:26 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:13.266 10:13:26 -- common/autotest_common.sh@660 -- # es=1 00:08:13.266 10:13:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:13.266 10:13:26 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:13.266 10:13:26 -- common/autotest_common.sh@640 -- # local es=0 00:08:13.266 10:13:26 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:13.266 10:13:26 -- common/autotest_common.sh@628 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.266 10:13:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:13.266 10:13:26 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.266 10:13:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:13.266 10:13:26 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.266 10:13:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:13.266 10:13:26 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.266 10:13:26 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:13.266 10:13:26 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:13.266 [2024-07-26 10:13:26.627519] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:13.266 [2024-07-26 10:13:26.627666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69781 ] 00:08:13.525 [2024-07-26 10:13:26.761850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.525 [2024-07-26 10:13:26.865240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.525 [2024-07-26 10:13:26.954526] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:13.525 [2024-07-26 10:13:26.954615] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:13.525 [2024-07-26 10:13:26.954632] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:13.783 [2024-07-26 10:13:27.071758] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:13.783 10:13:27 -- common/autotest_common.sh@643 -- # es=236 00:08:13.783 10:13:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:13.783 10:13:27 -- common/autotest_common.sh@652 -- # es=108 00:08:13.783 10:13:27 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:13.783 10:13:27 -- common/autotest_common.sh@660 -- # es=1 00:08:13.783 ************************************ 00:08:13.783 END TEST dd_flag_directory 00:08:13.783 ************************************ 00:08:13.783 10:13:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:13.783 00:08:13.783 real 0m1.199s 00:08:13.783 user 0m0.679s 00:08:13.783 sys 0m0.306s 00:08:13.783 10:13:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.783 10:13:27 -- common/autotest_common.sh@10 -- # set +x 00:08:13.783 10:13:27 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:13.783 10:13:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:13.783 10:13:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:13.783 10:13:27 -- common/autotest_common.sh@10 -- # set +x 00:08:13.783 ************************************ 00:08:13.783 START TEST dd_flag_nofollow 00:08:13.783 ************************************ 00:08:13.783 10:13:27 -- common/autotest_common.sh@1104 -- # nofollow 00:08:13.783 10:13:27 -- dd/posix.sh@36 -- # local 
test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:13.783 10:13:27 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:13.783 10:13:27 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:13.783 10:13:27 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:13.783 10:13:27 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.783 10:13:27 -- common/autotest_common.sh@640 -- # local es=0 00:08:13.783 10:13:27 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.783 10:13:27 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.783 10:13:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:13.783 10:13:27 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.783 10:13:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:13.783 10:13:27 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.783 10:13:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:13.783 10:13:27 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.783 10:13:27 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:13.783 10:13:27 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:14.042 [2024-07-26 10:13:27.275648] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:14.042 [2024-07-26 10:13:27.275753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69815 ] 00:08:14.042 [2024-07-26 10:13:27.410398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.300 [2024-07-26 10:13:27.510398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.300 [2024-07-26 10:13:27.597973] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:14.300 [2024-07-26 10:13:27.598033] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:14.300 [2024-07-26 10:13:27.598050] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:14.300 [2024-07-26 10:13:27.717329] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:14.559 10:13:27 -- common/autotest_common.sh@643 -- # es=216 00:08:14.559 10:13:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:14.559 10:13:27 -- common/autotest_common.sh@652 -- # es=88 00:08:14.559 10:13:27 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:14.559 10:13:27 -- common/autotest_common.sh@660 -- # es=1 00:08:14.559 10:13:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:14.559 10:13:27 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:14.559 10:13:27 -- common/autotest_common.sh@640 -- # local es=0 00:08:14.559 10:13:27 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:14.559 10:13:27 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.559 10:13:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:14.559 10:13:27 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.559 10:13:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:14.559 10:13:27 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.559 10:13:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:14.559 10:13:27 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.559 10:13:27 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:14.559 10:13:27 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:14.559 [2024-07-26 10:13:27.872247] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:14.559 [2024-07-26 10:13:27.872373] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69819 ] 00:08:14.559 [2024-07-26 10:13:28.010258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.818 [2024-07-26 10:13:28.115785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.818 [2024-07-26 10:13:28.209407] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:14.818 [2024-07-26 10:13:28.209475] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:14.818 [2024-07-26 10:13:28.209493] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:15.076 [2024-07-26 10:13:28.328647] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:15.076 10:13:28 -- common/autotest_common.sh@643 -- # es=216 00:08:15.076 10:13:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:15.076 10:13:28 -- common/autotest_common.sh@652 -- # es=88 00:08:15.076 10:13:28 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:15.076 10:13:28 -- common/autotest_common.sh@660 -- # es=1 00:08:15.076 10:13:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:15.076 10:13:28 -- dd/posix.sh@46 -- # gen_bytes 512 00:08:15.076 10:13:28 -- dd/common.sh@98 -- # xtrace_disable 00:08:15.076 10:13:28 -- common/autotest_common.sh@10 -- # set +x 00:08:15.076 10:13:28 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:15.076 [2024-07-26 10:13:28.493318] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:15.076 [2024-07-26 10:13:28.493803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69832 ] 00:08:15.334 [2024-07-26 10:13:28.628061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.334 [2024-07-26 10:13:28.739757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.850  Copying: 512/512 [B] (average 500 kBps) 00:08:15.850 00:08:15.850 10:13:29 -- dd/posix.sh@49 -- # [[ dhiwxx0olqbhglc7ss2krgb8jv4oj4t7weyuxl6z8r812o0l0qvqa71weml6u0b5kif502ezr2ix8av4vgouvlkds1gb5r8sp5d0moc8jlyifb3nsx9peqp6qf3k8db8nd8paxctpvlilrmdd5uo2mb59c70fr08s3ulugdo5c0tdyyyy25xkzdkvuovfoioyj9ftnvzp1e7l0oxt4b15siycuymwua92xhg27jw13fy601q5ufem44zzy1xug2578pjx63mtoac1y8oa6r1k830i5muck12vfq1gf76ib1oife5mx5oy94g2us58dmfkptvyalb2f4y2qyjli8y0ii44gc6d87th8l1j3sp7yof10ykky0a2ogw8dcpqgw0gg2njs93d99kwp87st0yjg1tb8kt9xzcafpxgw0djlyptvsvttwn536zoj3aov5nnrf4a5psqz5doezjebmzt6w4ghmv5ydba0b35n3755x1xpcsnbgdwxfgmba3b9nu == \d\h\i\w\x\x\0\o\l\q\b\h\g\l\c\7\s\s\2\k\r\g\b\8\j\v\4\o\j\4\t\7\w\e\y\u\x\l\6\z\8\r\8\1\2\o\0\l\0\q\v\q\a\7\1\w\e\m\l\6\u\0\b\5\k\i\f\5\0\2\e\z\r\2\i\x\8\a\v\4\v\g\o\u\v\l\k\d\s\1\g\b\5\r\8\s\p\5\d\0\m\o\c\8\j\l\y\i\f\b\3\n\s\x\9\p\e\q\p\6\q\f\3\k\8\d\b\8\n\d\8\p\a\x\c\t\p\v\l\i\l\r\m\d\d\5\u\o\2\m\b\5\9\c\7\0\f\r\0\8\s\3\u\l\u\g\d\o\5\c\0\t\d\y\y\y\y\2\5\x\k\z\d\k\v\u\o\v\f\o\i\o\y\j\9\f\t\n\v\z\p\1\e\7\l\0\o\x\t\4\b\1\5\s\i\y\c\u\y\m\w\u\a\9\2\x\h\g\2\7\j\w\1\3\f\y\6\0\1\q\5\u\f\e\m\4\4\z\z\y\1\x\u\g\2\5\7\8\p\j\x\6\3\m\t\o\a\c\1\y\8\o\a\6\r\1\k\8\3\0\i\5\m\u\c\k\1\2\v\f\q\1\g\f\7\6\i\b\1\o\i\f\e\5\m\x\5\o\y\9\4\g\2\u\s\5\8\d\m\f\k\p\t\v\y\a\l\b\2\f\4\y\2\q\y\j\l\i\8\y\0\i\i\4\4\g\c\6\d\8\7\t\h\8\l\1\j\3\s\p\7\y\o\f\1\0\y\k\k\y\0\a\2\o\g\w\8\d\c\p\q\g\w\0\g\g\2\n\j\s\9\3\d\9\9\k\w\p\8\7\s\t\0\y\j\g\1\t\b\8\k\t\9\x\z\c\a\f\p\x\g\w\0\d\j\l\y\p\t\v\s\v\t\t\w\n\5\3\6\z\o\j\3\a\o\v\5\n\n\r\f\4\a\5\p\s\q\z\5\d\o\e\z\j\e\b\m\z\t\6\w\4\g\h\m\v\5\y\d\b\a\0\b\3\5\n\3\7\5\5\x\1\x\p\c\s\n\b\g\d\w\x\f\g\m\b\a\3\b\9\n\u ]] 00:08:15.850 ************************************ 00:08:15.850 END TEST dd_flag_nofollow 00:08:15.850 ************************************ 00:08:15.850 00:08:15.850 real 0m1.837s 00:08:15.850 user 0m1.051s 00:08:15.850 sys 0m0.450s 00:08:15.850 10:13:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.850 10:13:29 -- common/autotest_common.sh@10 -- # set +x 00:08:15.850 10:13:29 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:15.850 10:13:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:15.850 10:13:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:15.850 10:13:29 -- common/autotest_common.sh@10 -- # set +x 00:08:15.850 ************************************ 00:08:15.850 START TEST dd_flag_noatime 00:08:15.850 ************************************ 00:08:15.850 10:13:29 -- common/autotest_common.sh@1104 -- # noatime 00:08:15.850 10:13:29 -- dd/posix.sh@53 -- # local atime_if 00:08:15.850 10:13:29 -- dd/posix.sh@54 -- # local atime_of 00:08:15.850 10:13:29 -- dd/posix.sh@58 -- # gen_bytes 512 00:08:15.850 10:13:29 -- dd/common.sh@98 -- # xtrace_disable 00:08:15.850 10:13:29 -- common/autotest_common.sh@10 -- # set +x 00:08:15.850 10:13:29 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:15.851 10:13:29 -- dd/posix.sh@60 -- # atime_if=1721988808 00:08:15.851 10:13:29 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:15.851 10:13:29 -- dd/posix.sh@61 -- # atime_of=1721988809 00:08:15.851 10:13:29 -- dd/posix.sh@66 -- # sleep 1 00:08:16.785 10:13:30 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:16.785 [2024-07-26 10:13:30.178961] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:16.785 [2024-07-26 10:13:30.179094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69873 ] 00:08:17.043 [2024-07-26 10:13:30.318886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.043 [2024-07-26 10:13:30.443030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.558  Copying: 512/512 [B] (average 500 kBps) 00:08:17.558 00:08:17.558 10:13:30 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:17.558 10:13:30 -- dd/posix.sh@69 -- # (( atime_if == 1721988808 )) 00:08:17.558 10:13:30 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:17.558 10:13:30 -- dd/posix.sh@70 -- # (( atime_of == 1721988809 )) 00:08:17.558 10:13:30 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:17.558 [2024-07-26 10:13:30.838476] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:17.558 [2024-07-26 10:13:30.838608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69884 ] 00:08:17.558 [2024-07-26 10:13:30.975674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.816 [2024-07-26 10:13:31.086556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.074  Copying: 512/512 [B] (average 500 kBps) 00:08:18.074 00:08:18.074 10:13:31 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:18.074 10:13:31 -- dd/posix.sh@73 -- # (( atime_if < 1721988811 )) 00:08:18.074 00:08:18.074 real 0m2.341s 00:08:18.074 user 0m0.757s 00:08:18.074 sys 0m0.337s 00:08:18.074 10:13:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.074 ************************************ 00:08:18.074 END TEST dd_flag_noatime 00:08:18.074 10:13:31 -- common/autotest_common.sh@10 -- # set +x 00:08:18.074 ************************************ 00:08:18.074 10:13:31 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:18.074 10:13:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:18.074 10:13:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:18.074 10:13:31 -- common/autotest_common.sh@10 -- # set +x 00:08:18.074 ************************************ 00:08:18.074 START TEST dd_flags_misc 00:08:18.074 ************************************ 00:08:18.074 10:13:31 -- common/autotest_common.sh@1104 -- # io 00:08:18.074 10:13:31 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:18.074 10:13:31 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
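The dd_flag_noatime check that just finished is, in outline: capture the source file's atime, sleep one second, read it through spdk_dd with --iflag=noatime and confirm the atime is unchanged, then repeat without the flag and confirm it advances (a simplified sketch; DD and src abbreviate the full paths from the trace):
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
src=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
atime_if=$(stat --printf=%X "$src")            # 1721988808 in this run
sleep 1
"$DD" --if="$src" --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
(( atime_if == $(stat --printf=%X "$src") ))   # a noatime read leaves the atime alone
"$DD" --if="$src" --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
(( atime_if < $(stat --printf=%X "$src") ))    # a plain read bumps it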
00:08:18.074 10:13:31 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:18.074 10:13:31 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:18.074 10:13:31 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:18.074 10:13:31 -- dd/common.sh@98 -- # xtrace_disable 00:08:18.074 10:13:31 -- common/autotest_common.sh@10 -- # set +x 00:08:18.074 10:13:31 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:18.074 10:13:31 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:18.332 [2024-07-26 10:13:31.567478] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:18.332 [2024-07-26 10:13:31.567689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69916 ] 00:08:18.332 [2024-07-26 10:13:31.713495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.588 [2024-07-26 10:13:31.835370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.852  Copying: 512/512 [B] (average 500 kBps) 00:08:18.852 00:08:18.852 10:13:32 -- dd/posix.sh@93 -- # [[ 1ngdhjouf1tcmn2vkt6nmxnkhk43of3jmrid9zavob9vjsl9aoukpza12k7ucf3ypjsqpz6z6o0jl22pkimzn44hylwozl76si8u8q2gijlzilzbkguzczw3rb5nmlp1mpbxux6mx0ztsk0m4k68zg3x7y3lxy6fo26lm1rxsj9gtmjor05wsv5ojkbtalmj261nzjnegqgulpsij4cgpwggj7fkbzaqfc8hvdyrem9hdcpnr9um8sfv38r4o3g26hkscs85x6vtv31znkmhsutwv2pfhaos5oaqluj44f41ngai8vd9vkr5w0kgd8ui4vskge515i408ad1dr2qteiahb3q9kg0d3q93jrauk23bm0ikdrkzxiicmbxttk3a5ih9h4s9mcjdapgu6aixt5qucv7zbhtlj7rg73ilmwhzxmzlw9hq2y2v20f4dtleycvg24aeuvrezzyldsuyp364190iw40lqxwbyuj1k7y4fgymbhm44h0ahe2xcmv == \1\n\g\d\h\j\o\u\f\1\t\c\m\n\2\v\k\t\6\n\m\x\n\k\h\k\4\3\o\f\3\j\m\r\i\d\9\z\a\v\o\b\9\v\j\s\l\9\a\o\u\k\p\z\a\1\2\k\7\u\c\f\3\y\p\j\s\q\p\z\6\z\6\o\0\j\l\2\2\p\k\i\m\z\n\4\4\h\y\l\w\o\z\l\7\6\s\i\8\u\8\q\2\g\i\j\l\z\i\l\z\b\k\g\u\z\c\z\w\3\r\b\5\n\m\l\p\1\m\p\b\x\u\x\6\m\x\0\z\t\s\k\0\m\4\k\6\8\z\g\3\x\7\y\3\l\x\y\6\f\o\2\6\l\m\1\r\x\s\j\9\g\t\m\j\o\r\0\5\w\s\v\5\o\j\k\b\t\a\l\m\j\2\6\1\n\z\j\n\e\g\q\g\u\l\p\s\i\j\4\c\g\p\w\g\g\j\7\f\k\b\z\a\q\f\c\8\h\v\d\y\r\e\m\9\h\d\c\p\n\r\9\u\m\8\s\f\v\3\8\r\4\o\3\g\2\6\h\k\s\c\s\8\5\x\6\v\t\v\3\1\z\n\k\m\h\s\u\t\w\v\2\p\f\h\a\o\s\5\o\a\q\l\u\j\4\4\f\4\1\n\g\a\i\8\v\d\9\v\k\r\5\w\0\k\g\d\8\u\i\4\v\s\k\g\e\5\1\5\i\4\0\8\a\d\1\d\r\2\q\t\e\i\a\h\b\3\q\9\k\g\0\d\3\q\9\3\j\r\a\u\k\2\3\b\m\0\i\k\d\r\k\z\x\i\i\c\m\b\x\t\t\k\3\a\5\i\h\9\h\4\s\9\m\c\j\d\a\p\g\u\6\a\i\x\t\5\q\u\c\v\7\z\b\h\t\l\j\7\r\g\7\3\i\l\m\w\h\z\x\m\z\l\w\9\h\q\2\y\2\v\2\0\f\4\d\t\l\e\y\c\v\g\2\4\a\e\u\v\r\e\z\z\y\l\d\s\u\y\p\3\6\4\1\9\0\i\w\4\0\l\q\x\w\b\y\u\j\1\k\7\y\4\f\g\y\m\b\h\m\4\4\h\0\a\h\e\2\x\c\m\v ]] 00:08:18.852 10:13:32 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:18.852 10:13:32 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:18.852 [2024-07-26 10:13:32.211160] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:18.852 [2024-07-26 10:13:32.211282] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69924 ] 00:08:19.114 [2024-07-26 10:13:32.345440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.114 [2024-07-26 10:13:32.454688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.371  Copying: 512/512 [B] (average 500 kBps) 00:08:19.371 00:08:19.371 10:13:32 -- dd/posix.sh@93 -- # [[ 1ngdhjouf1tcmn2vkt6nmxnkhk43of3jmrid9zavob9vjsl9aoukpza12k7ucf3ypjsqpz6z6o0jl22pkimzn44hylwozl76si8u8q2gijlzilzbkguzczw3rb5nmlp1mpbxux6mx0ztsk0m4k68zg3x7y3lxy6fo26lm1rxsj9gtmjor05wsv5ojkbtalmj261nzjnegqgulpsij4cgpwggj7fkbzaqfc8hvdyrem9hdcpnr9um8sfv38r4o3g26hkscs85x6vtv31znkmhsutwv2pfhaos5oaqluj44f41ngai8vd9vkr5w0kgd8ui4vskge515i408ad1dr2qteiahb3q9kg0d3q93jrauk23bm0ikdrkzxiicmbxttk3a5ih9h4s9mcjdapgu6aixt5qucv7zbhtlj7rg73ilmwhzxmzlw9hq2y2v20f4dtleycvg24aeuvrezzyldsuyp364190iw40lqxwbyuj1k7y4fgymbhm44h0ahe2xcmv == \1\n\g\d\h\j\o\u\f\1\t\c\m\n\2\v\k\t\6\n\m\x\n\k\h\k\4\3\o\f\3\j\m\r\i\d\9\z\a\v\o\b\9\v\j\s\l\9\a\o\u\k\p\z\a\1\2\k\7\u\c\f\3\y\p\j\s\q\p\z\6\z\6\o\0\j\l\2\2\p\k\i\m\z\n\4\4\h\y\l\w\o\z\l\7\6\s\i\8\u\8\q\2\g\i\j\l\z\i\l\z\b\k\g\u\z\c\z\w\3\r\b\5\n\m\l\p\1\m\p\b\x\u\x\6\m\x\0\z\t\s\k\0\m\4\k\6\8\z\g\3\x\7\y\3\l\x\y\6\f\o\2\6\l\m\1\r\x\s\j\9\g\t\m\j\o\r\0\5\w\s\v\5\o\j\k\b\t\a\l\m\j\2\6\1\n\z\j\n\e\g\q\g\u\l\p\s\i\j\4\c\g\p\w\g\g\j\7\f\k\b\z\a\q\f\c\8\h\v\d\y\r\e\m\9\h\d\c\p\n\r\9\u\m\8\s\f\v\3\8\r\4\o\3\g\2\6\h\k\s\c\s\8\5\x\6\v\t\v\3\1\z\n\k\m\h\s\u\t\w\v\2\p\f\h\a\o\s\5\o\a\q\l\u\j\4\4\f\4\1\n\g\a\i\8\v\d\9\v\k\r\5\w\0\k\g\d\8\u\i\4\v\s\k\g\e\5\1\5\i\4\0\8\a\d\1\d\r\2\q\t\e\i\a\h\b\3\q\9\k\g\0\d\3\q\9\3\j\r\a\u\k\2\3\b\m\0\i\k\d\r\k\z\x\i\i\c\m\b\x\t\t\k\3\a\5\i\h\9\h\4\s\9\m\c\j\d\a\p\g\u\6\a\i\x\t\5\q\u\c\v\7\z\b\h\t\l\j\7\r\g\7\3\i\l\m\w\h\z\x\m\z\l\w\9\h\q\2\y\2\v\2\0\f\4\d\t\l\e\y\c\v\g\2\4\a\e\u\v\r\e\z\z\y\l\d\s\u\y\p\3\6\4\1\9\0\i\w\4\0\l\q\x\w\b\y\u\j\1\k\7\y\4\f\g\y\m\b\h\m\4\4\h\0\a\h\e\2\x\c\m\v ]] 00:08:19.371 10:13:32 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:19.371 10:13:32 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:19.371 [2024-07-26 10:13:32.823394] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:19.371 [2024-07-26 10:13:32.823520] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69931 ] 00:08:19.629 [2024-07-26 10:13:32.961212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.629 [2024-07-26 10:13:33.065598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.146  Copying: 512/512 [B] (average 100 kBps) 00:08:20.146 00:08:20.146 10:13:33 -- dd/posix.sh@93 -- # [[ 1ngdhjouf1tcmn2vkt6nmxnkhk43of3jmrid9zavob9vjsl9aoukpza12k7ucf3ypjsqpz6z6o0jl22pkimzn44hylwozl76si8u8q2gijlzilzbkguzczw3rb5nmlp1mpbxux6mx0ztsk0m4k68zg3x7y3lxy6fo26lm1rxsj9gtmjor05wsv5ojkbtalmj261nzjnegqgulpsij4cgpwggj7fkbzaqfc8hvdyrem9hdcpnr9um8sfv38r4o3g26hkscs85x6vtv31znkmhsutwv2pfhaos5oaqluj44f41ngai8vd9vkr5w0kgd8ui4vskge515i408ad1dr2qteiahb3q9kg0d3q93jrauk23bm0ikdrkzxiicmbxttk3a5ih9h4s9mcjdapgu6aixt5qucv7zbhtlj7rg73ilmwhzxmzlw9hq2y2v20f4dtleycvg24aeuvrezzyldsuyp364190iw40lqxwbyuj1k7y4fgymbhm44h0ahe2xcmv == \1\n\g\d\h\j\o\u\f\1\t\c\m\n\2\v\k\t\6\n\m\x\n\k\h\k\4\3\o\f\3\j\m\r\i\d\9\z\a\v\o\b\9\v\j\s\l\9\a\o\u\k\p\z\a\1\2\k\7\u\c\f\3\y\p\j\s\q\p\z\6\z\6\o\0\j\l\2\2\p\k\i\m\z\n\4\4\h\y\l\w\o\z\l\7\6\s\i\8\u\8\q\2\g\i\j\l\z\i\l\z\b\k\g\u\z\c\z\w\3\r\b\5\n\m\l\p\1\m\p\b\x\u\x\6\m\x\0\z\t\s\k\0\m\4\k\6\8\z\g\3\x\7\y\3\l\x\y\6\f\o\2\6\l\m\1\r\x\s\j\9\g\t\m\j\o\r\0\5\w\s\v\5\o\j\k\b\t\a\l\m\j\2\6\1\n\z\j\n\e\g\q\g\u\l\p\s\i\j\4\c\g\p\w\g\g\j\7\f\k\b\z\a\q\f\c\8\h\v\d\y\r\e\m\9\h\d\c\p\n\r\9\u\m\8\s\f\v\3\8\r\4\o\3\g\2\6\h\k\s\c\s\8\5\x\6\v\t\v\3\1\z\n\k\m\h\s\u\t\w\v\2\p\f\h\a\o\s\5\o\a\q\l\u\j\4\4\f\4\1\n\g\a\i\8\v\d\9\v\k\r\5\w\0\k\g\d\8\u\i\4\v\s\k\g\e\5\1\5\i\4\0\8\a\d\1\d\r\2\q\t\e\i\a\h\b\3\q\9\k\g\0\d\3\q\9\3\j\r\a\u\k\2\3\b\m\0\i\k\d\r\k\z\x\i\i\c\m\b\x\t\t\k\3\a\5\i\h\9\h\4\s\9\m\c\j\d\a\p\g\u\6\a\i\x\t\5\q\u\c\v\7\z\b\h\t\l\j\7\r\g\7\3\i\l\m\w\h\z\x\m\z\l\w\9\h\q\2\y\2\v\2\0\f\4\d\t\l\e\y\c\v\g\2\4\a\e\u\v\r\e\z\z\y\l\d\s\u\y\p\3\6\4\1\9\0\i\w\4\0\l\q\x\w\b\y\u\j\1\k\7\y\4\f\g\y\m\b\h\m\4\4\h\0\a\h\e\2\x\c\m\v ]] 00:08:20.146 10:13:33 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:20.146 10:13:33 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:20.146 [2024-07-26 10:13:33.467588] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
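Every iteration above follows the same verify step: once "Copying: 512/512 [B]" is reported, the check at dd/posix.sh@93 compares dd.dump1 against dd.dump0 with a bash [[ ... == ... ]] test, and the long backslash-heavy blob is simply xtrace printing the quoted right-hand pattern one escaped character at a time. A minimal equivalent, assuming the test really does compare the two files' contents as strings (DUMP0/DUMP1 as defined in the sketch above):

  src=$(< "$DUMP0")
  dst=$(< "$DUMP1")
  [[ "$dst" == "$src" ]]    # a mismatch here fails the whole dd_flags_misc test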
00:08:20.146 [2024-07-26 10:13:33.467686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69943 ] 00:08:20.404 [2024-07-26 10:13:33.606724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.404 [2024-07-26 10:13:33.735098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.663  Copying: 512/512 [B] (average 250 kBps) 00:08:20.663 00:08:20.663 10:13:34 -- dd/posix.sh@93 -- # [[ 1ngdhjouf1tcmn2vkt6nmxnkhk43of3jmrid9zavob9vjsl9aoukpza12k7ucf3ypjsqpz6z6o0jl22pkimzn44hylwozl76si8u8q2gijlzilzbkguzczw3rb5nmlp1mpbxux6mx0ztsk0m4k68zg3x7y3lxy6fo26lm1rxsj9gtmjor05wsv5ojkbtalmj261nzjnegqgulpsij4cgpwggj7fkbzaqfc8hvdyrem9hdcpnr9um8sfv38r4o3g26hkscs85x6vtv31znkmhsutwv2pfhaos5oaqluj44f41ngai8vd9vkr5w0kgd8ui4vskge515i408ad1dr2qteiahb3q9kg0d3q93jrauk23bm0ikdrkzxiicmbxttk3a5ih9h4s9mcjdapgu6aixt5qucv7zbhtlj7rg73ilmwhzxmzlw9hq2y2v20f4dtleycvg24aeuvrezzyldsuyp364190iw40lqxwbyuj1k7y4fgymbhm44h0ahe2xcmv == \1\n\g\d\h\j\o\u\f\1\t\c\m\n\2\v\k\t\6\n\m\x\n\k\h\k\4\3\o\f\3\j\m\r\i\d\9\z\a\v\o\b\9\v\j\s\l\9\a\o\u\k\p\z\a\1\2\k\7\u\c\f\3\y\p\j\s\q\p\z\6\z\6\o\0\j\l\2\2\p\k\i\m\z\n\4\4\h\y\l\w\o\z\l\7\6\s\i\8\u\8\q\2\g\i\j\l\z\i\l\z\b\k\g\u\z\c\z\w\3\r\b\5\n\m\l\p\1\m\p\b\x\u\x\6\m\x\0\z\t\s\k\0\m\4\k\6\8\z\g\3\x\7\y\3\l\x\y\6\f\o\2\6\l\m\1\r\x\s\j\9\g\t\m\j\o\r\0\5\w\s\v\5\o\j\k\b\t\a\l\m\j\2\6\1\n\z\j\n\e\g\q\g\u\l\p\s\i\j\4\c\g\p\w\g\g\j\7\f\k\b\z\a\q\f\c\8\h\v\d\y\r\e\m\9\h\d\c\p\n\r\9\u\m\8\s\f\v\3\8\r\4\o\3\g\2\6\h\k\s\c\s\8\5\x\6\v\t\v\3\1\z\n\k\m\h\s\u\t\w\v\2\p\f\h\a\o\s\5\o\a\q\l\u\j\4\4\f\4\1\n\g\a\i\8\v\d\9\v\k\r\5\w\0\k\g\d\8\u\i\4\v\s\k\g\e\5\1\5\i\4\0\8\a\d\1\d\r\2\q\t\e\i\a\h\b\3\q\9\k\g\0\d\3\q\9\3\j\r\a\u\k\2\3\b\m\0\i\k\d\r\k\z\x\i\i\c\m\b\x\t\t\k\3\a\5\i\h\9\h\4\s\9\m\c\j\d\a\p\g\u\6\a\i\x\t\5\q\u\c\v\7\z\b\h\t\l\j\7\r\g\7\3\i\l\m\w\h\z\x\m\z\l\w\9\h\q\2\y\2\v\2\0\f\4\d\t\l\e\y\c\v\g\2\4\a\e\u\v\r\e\z\z\y\l\d\s\u\y\p\3\6\4\1\9\0\i\w\4\0\l\q\x\w\b\y\u\j\1\k\7\y\4\f\g\y\m\b\h\m\4\4\h\0\a\h\e\2\x\c\m\v ]] 00:08:20.663 10:13:34 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:20.663 10:13:34 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:20.663 10:13:34 -- dd/common.sh@98 -- # xtrace_disable 00:08:20.663 10:13:34 -- common/autotest_common.sh@10 -- # set +x 00:08:20.921 10:13:34 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:20.921 10:13:34 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:20.921 [2024-07-26 10:13:34.172944] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:20.921 [2024-07-26 10:13:34.173172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69952 ] 00:08:20.921 [2024-07-26 10:13:34.317506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.179 [2024-07-26 10:13:34.425044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.437  Copying: 512/512 [B] (average 500 kBps) 00:08:21.437 00:08:21.438 10:13:34 -- dd/posix.sh@93 -- # [[ jb5qi5xxmabtzcw1h3xmpbyyxdh7e0xoeijg7fgj90ybibi2460zmq1txzxacddh0pcwxctljkntj9hjlq0b4h5vfsb0up8z3pmkbyfeap79jpwpa52l4skypcc6lv5onttb2q6ynf455muwesbb0t9zkpnfv9521mtaramgmxumfklu68nifmxdmfhcnhf93gixlsbvlh63x1m97woyj53zddeyx60fz8w0kplyrkxkqexywfmuvcqn0u0m2w3mzyf1dox0ydqpz87yrtiv4lxx0jmcdyug1xe6zpqdvev5clelt3jh2wkd2m5j9qor96optctdd10jc9pm5nt75q92n9p7mddxum7cgb2aie3h2ujceg61dlrqh9cm5ou592c9w59iylf1tjymglv3dt70cm8xk3onkg4jjfl4lu33r0otst0d4146tdqwppmez5bg9s1aqq19ms2c92jdem8x9hqjzyz3hd1m9qed0jrum7kfpmxe1lkrb896zx5v == \j\b\5\q\i\5\x\x\m\a\b\t\z\c\w\1\h\3\x\m\p\b\y\y\x\d\h\7\e\0\x\o\e\i\j\g\7\f\g\j\9\0\y\b\i\b\i\2\4\6\0\z\m\q\1\t\x\z\x\a\c\d\d\h\0\p\c\w\x\c\t\l\j\k\n\t\j\9\h\j\l\q\0\b\4\h\5\v\f\s\b\0\u\p\8\z\3\p\m\k\b\y\f\e\a\p\7\9\j\p\w\p\a\5\2\l\4\s\k\y\p\c\c\6\l\v\5\o\n\t\t\b\2\q\6\y\n\f\4\5\5\m\u\w\e\s\b\b\0\t\9\z\k\p\n\f\v\9\5\2\1\m\t\a\r\a\m\g\m\x\u\m\f\k\l\u\6\8\n\i\f\m\x\d\m\f\h\c\n\h\f\9\3\g\i\x\l\s\b\v\l\h\6\3\x\1\m\9\7\w\o\y\j\5\3\z\d\d\e\y\x\6\0\f\z\8\w\0\k\p\l\y\r\k\x\k\q\e\x\y\w\f\m\u\v\c\q\n\0\u\0\m\2\w\3\m\z\y\f\1\d\o\x\0\y\d\q\p\z\8\7\y\r\t\i\v\4\l\x\x\0\j\m\c\d\y\u\g\1\x\e\6\z\p\q\d\v\e\v\5\c\l\e\l\t\3\j\h\2\w\k\d\2\m\5\j\9\q\o\r\9\6\o\p\t\c\t\d\d\1\0\j\c\9\p\m\5\n\t\7\5\q\9\2\n\9\p\7\m\d\d\x\u\m\7\c\g\b\2\a\i\e\3\h\2\u\j\c\e\g\6\1\d\l\r\q\h\9\c\m\5\o\u\5\9\2\c\9\w\5\9\i\y\l\f\1\t\j\y\m\g\l\v\3\d\t\7\0\c\m\8\x\k\3\o\n\k\g\4\j\j\f\l\4\l\u\3\3\r\0\o\t\s\t\0\d\4\1\4\6\t\d\q\w\p\p\m\e\z\5\b\g\9\s\1\a\q\q\1\9\m\s\2\c\9\2\j\d\e\m\8\x\9\h\q\j\z\y\z\3\h\d\1\m\9\q\e\d\0\j\r\u\m\7\k\f\p\m\x\e\1\l\k\r\b\8\9\6\z\x\5\v ]] 00:08:21.438 10:13:34 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:21.438 10:13:34 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:21.438 [2024-07-26 10:13:34.810039] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:21.438 [2024-07-26 10:13:34.810174] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69959 ] 00:08:21.696 [2024-07-26 10:13:34.945620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.696 [2024-07-26 10:13:35.064920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.955  Copying: 512/512 [B] (average 500 kBps) 00:08:21.955 00:08:21.955 10:13:35 -- dd/posix.sh@93 -- # [[ jb5qi5xxmabtzcw1h3xmpbyyxdh7e0xoeijg7fgj90ybibi2460zmq1txzxacddh0pcwxctljkntj9hjlq0b4h5vfsb0up8z3pmkbyfeap79jpwpa52l4skypcc6lv5onttb2q6ynf455muwesbb0t9zkpnfv9521mtaramgmxumfklu68nifmxdmfhcnhf93gixlsbvlh63x1m97woyj53zddeyx60fz8w0kplyrkxkqexywfmuvcqn0u0m2w3mzyf1dox0ydqpz87yrtiv4lxx0jmcdyug1xe6zpqdvev5clelt3jh2wkd2m5j9qor96optctdd10jc9pm5nt75q92n9p7mddxum7cgb2aie3h2ujceg61dlrqh9cm5ou592c9w59iylf1tjymglv3dt70cm8xk3onkg4jjfl4lu33r0otst0d4146tdqwppmez5bg9s1aqq19ms2c92jdem8x9hqjzyz3hd1m9qed0jrum7kfpmxe1lkrb896zx5v == \j\b\5\q\i\5\x\x\m\a\b\t\z\c\w\1\h\3\x\m\p\b\y\y\x\d\h\7\e\0\x\o\e\i\j\g\7\f\g\j\9\0\y\b\i\b\i\2\4\6\0\z\m\q\1\t\x\z\x\a\c\d\d\h\0\p\c\w\x\c\t\l\j\k\n\t\j\9\h\j\l\q\0\b\4\h\5\v\f\s\b\0\u\p\8\z\3\p\m\k\b\y\f\e\a\p\7\9\j\p\w\p\a\5\2\l\4\s\k\y\p\c\c\6\l\v\5\o\n\t\t\b\2\q\6\y\n\f\4\5\5\m\u\w\e\s\b\b\0\t\9\z\k\p\n\f\v\9\5\2\1\m\t\a\r\a\m\g\m\x\u\m\f\k\l\u\6\8\n\i\f\m\x\d\m\f\h\c\n\h\f\9\3\g\i\x\l\s\b\v\l\h\6\3\x\1\m\9\7\w\o\y\j\5\3\z\d\d\e\y\x\6\0\f\z\8\w\0\k\p\l\y\r\k\x\k\q\e\x\y\w\f\m\u\v\c\q\n\0\u\0\m\2\w\3\m\z\y\f\1\d\o\x\0\y\d\q\p\z\8\7\y\r\t\i\v\4\l\x\x\0\j\m\c\d\y\u\g\1\x\e\6\z\p\q\d\v\e\v\5\c\l\e\l\t\3\j\h\2\w\k\d\2\m\5\j\9\q\o\r\9\6\o\p\t\c\t\d\d\1\0\j\c\9\p\m\5\n\t\7\5\q\9\2\n\9\p\7\m\d\d\x\u\m\7\c\g\b\2\a\i\e\3\h\2\u\j\c\e\g\6\1\d\l\r\q\h\9\c\m\5\o\u\5\9\2\c\9\w\5\9\i\y\l\f\1\t\j\y\m\g\l\v\3\d\t\7\0\c\m\8\x\k\3\o\n\k\g\4\j\j\f\l\4\l\u\3\3\r\0\o\t\s\t\0\d\4\1\4\6\t\d\q\w\p\p\m\e\z\5\b\g\9\s\1\a\q\q\1\9\m\s\2\c\9\2\j\d\e\m\8\x\9\h\q\j\z\y\z\3\h\d\1\m\9\q\e\d\0\j\r\u\m\7\k\f\p\m\x\e\1\l\k\r\b\8\9\6\z\x\5\v ]] 00:08:21.955 10:13:35 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:21.955 10:13:35 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:22.213 [2024-07-26 10:13:35.458764] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:22.213 [2024-07-26 10:13:35.458892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69967 ] 00:08:22.213 [2024-07-26 10:13:35.597822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.472 [2024-07-26 10:13:35.709966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.730  Copying: 512/512 [B] (average 166 kBps) 00:08:22.730 00:08:22.730 10:13:36 -- dd/posix.sh@93 -- # [[ jb5qi5xxmabtzcw1h3xmpbyyxdh7e0xoeijg7fgj90ybibi2460zmq1txzxacddh0pcwxctljkntj9hjlq0b4h5vfsb0up8z3pmkbyfeap79jpwpa52l4skypcc6lv5onttb2q6ynf455muwesbb0t9zkpnfv9521mtaramgmxumfklu68nifmxdmfhcnhf93gixlsbvlh63x1m97woyj53zddeyx60fz8w0kplyrkxkqexywfmuvcqn0u0m2w3mzyf1dox0ydqpz87yrtiv4lxx0jmcdyug1xe6zpqdvev5clelt3jh2wkd2m5j9qor96optctdd10jc9pm5nt75q92n9p7mddxum7cgb2aie3h2ujceg61dlrqh9cm5ou592c9w59iylf1tjymglv3dt70cm8xk3onkg4jjfl4lu33r0otst0d4146tdqwppmez5bg9s1aqq19ms2c92jdem8x9hqjzyz3hd1m9qed0jrum7kfpmxe1lkrb896zx5v == \j\b\5\q\i\5\x\x\m\a\b\t\z\c\w\1\h\3\x\m\p\b\y\y\x\d\h\7\e\0\x\o\e\i\j\g\7\f\g\j\9\0\y\b\i\b\i\2\4\6\0\z\m\q\1\t\x\z\x\a\c\d\d\h\0\p\c\w\x\c\t\l\j\k\n\t\j\9\h\j\l\q\0\b\4\h\5\v\f\s\b\0\u\p\8\z\3\p\m\k\b\y\f\e\a\p\7\9\j\p\w\p\a\5\2\l\4\s\k\y\p\c\c\6\l\v\5\o\n\t\t\b\2\q\6\y\n\f\4\5\5\m\u\w\e\s\b\b\0\t\9\z\k\p\n\f\v\9\5\2\1\m\t\a\r\a\m\g\m\x\u\m\f\k\l\u\6\8\n\i\f\m\x\d\m\f\h\c\n\h\f\9\3\g\i\x\l\s\b\v\l\h\6\3\x\1\m\9\7\w\o\y\j\5\3\z\d\d\e\y\x\6\0\f\z\8\w\0\k\p\l\y\r\k\x\k\q\e\x\y\w\f\m\u\v\c\q\n\0\u\0\m\2\w\3\m\z\y\f\1\d\o\x\0\y\d\q\p\z\8\7\y\r\t\i\v\4\l\x\x\0\j\m\c\d\y\u\g\1\x\e\6\z\p\q\d\v\e\v\5\c\l\e\l\t\3\j\h\2\w\k\d\2\m\5\j\9\q\o\r\9\6\o\p\t\c\t\d\d\1\0\j\c\9\p\m\5\n\t\7\5\q\9\2\n\9\p\7\m\d\d\x\u\m\7\c\g\b\2\a\i\e\3\h\2\u\j\c\e\g\6\1\d\l\r\q\h\9\c\m\5\o\u\5\9\2\c\9\w\5\9\i\y\l\f\1\t\j\y\m\g\l\v\3\d\t\7\0\c\m\8\x\k\3\o\n\k\g\4\j\j\f\l\4\l\u\3\3\r\0\o\t\s\t\0\d\4\1\4\6\t\d\q\w\p\p\m\e\z\5\b\g\9\s\1\a\q\q\1\9\m\s\2\c\9\2\j\d\e\m\8\x\9\h\q\j\z\y\z\3\h\d\1\m\9\q\e\d\0\j\r\u\m\7\k\f\p\m\x\e\1\l\k\r\b\8\9\6\z\x\5\v ]] 00:08:22.730 10:13:36 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:22.730 10:13:36 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:22.730 [2024-07-26 10:13:36.093150] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
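The flag names being cycled here presumably map onto the like-named open(2) flags, so the sync and dsync passes differ only in how much of each write is synchronized: O_SYNC waits for file data and metadata (file-integrity completion), O_DSYNC for data only. One hedged way to confirm what a given spdk_dd run actually hands the kernel, assuming glibc routes open() through openat on this machine:

  strace -f -e trace=openat \
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct \
      --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 2>&1 |
    grep 'dd\.dump'    # the reported flags should include O_DIRECT and O_DSYNC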
00:08:22.730 [2024-07-26 10:13:36.093269] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69979 ] 00:08:22.990 [2024-07-26 10:13:36.224726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.990 [2024-07-26 10:13:36.330165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.249  Copying: 512/512 [B] (average 250 kBps) 00:08:23.249 00:08:23.249 10:13:36 -- dd/posix.sh@93 -- # [[ jb5qi5xxmabtzcw1h3xmpbyyxdh7e0xoeijg7fgj90ybibi2460zmq1txzxacddh0pcwxctljkntj9hjlq0b4h5vfsb0up8z3pmkbyfeap79jpwpa52l4skypcc6lv5onttb2q6ynf455muwesbb0t9zkpnfv9521mtaramgmxumfklu68nifmxdmfhcnhf93gixlsbvlh63x1m97woyj53zddeyx60fz8w0kplyrkxkqexywfmuvcqn0u0m2w3mzyf1dox0ydqpz87yrtiv4lxx0jmcdyug1xe6zpqdvev5clelt3jh2wkd2m5j9qor96optctdd10jc9pm5nt75q92n9p7mddxum7cgb2aie3h2ujceg61dlrqh9cm5ou592c9w59iylf1tjymglv3dt70cm8xk3onkg4jjfl4lu33r0otst0d4146tdqwppmez5bg9s1aqq19ms2c92jdem8x9hqjzyz3hd1m9qed0jrum7kfpmxe1lkrb896zx5v == \j\b\5\q\i\5\x\x\m\a\b\t\z\c\w\1\h\3\x\m\p\b\y\y\x\d\h\7\e\0\x\o\e\i\j\g\7\f\g\j\9\0\y\b\i\b\i\2\4\6\0\z\m\q\1\t\x\z\x\a\c\d\d\h\0\p\c\w\x\c\t\l\j\k\n\t\j\9\h\j\l\q\0\b\4\h\5\v\f\s\b\0\u\p\8\z\3\p\m\k\b\y\f\e\a\p\7\9\j\p\w\p\a\5\2\l\4\s\k\y\p\c\c\6\l\v\5\o\n\t\t\b\2\q\6\y\n\f\4\5\5\m\u\w\e\s\b\b\0\t\9\z\k\p\n\f\v\9\5\2\1\m\t\a\r\a\m\g\m\x\u\m\f\k\l\u\6\8\n\i\f\m\x\d\m\f\h\c\n\h\f\9\3\g\i\x\l\s\b\v\l\h\6\3\x\1\m\9\7\w\o\y\j\5\3\z\d\d\e\y\x\6\0\f\z\8\w\0\k\p\l\y\r\k\x\k\q\e\x\y\w\f\m\u\v\c\q\n\0\u\0\m\2\w\3\m\z\y\f\1\d\o\x\0\y\d\q\p\z\8\7\y\r\t\i\v\4\l\x\x\0\j\m\c\d\y\u\g\1\x\e\6\z\p\q\d\v\e\v\5\c\l\e\l\t\3\j\h\2\w\k\d\2\m\5\j\9\q\o\r\9\6\o\p\t\c\t\d\d\1\0\j\c\9\p\m\5\n\t\7\5\q\9\2\n\9\p\7\m\d\d\x\u\m\7\c\g\b\2\a\i\e\3\h\2\u\j\c\e\g\6\1\d\l\r\q\h\9\c\m\5\o\u\5\9\2\c\9\w\5\9\i\y\l\f\1\t\j\y\m\g\l\v\3\d\t\7\0\c\m\8\x\k\3\o\n\k\g\4\j\j\f\l\4\l\u\3\3\r\0\o\t\s\t\0\d\4\1\4\6\t\d\q\w\p\p\m\e\z\5\b\g\9\s\1\a\q\q\1\9\m\s\2\c\9\2\j\d\e\m\8\x\9\h\q\j\z\y\z\3\h\d\1\m\9\q\e\d\0\j\r\u\m\7\k\f\p\m\x\e\1\l\k\r\b\8\9\6\z\x\5\v ]] 00:08:23.249 00:08:23.249 real 0m5.143s 00:08:23.249 user 0m2.916s 00:08:23.249 sys 0m1.229s 00:08:23.249 10:13:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.249 10:13:36 -- common/autotest_common.sh@10 -- # set +x 00:08:23.249 ************************************ 00:08:23.249 END TEST dd_flags_misc 00:08:23.249 ************************************ 00:08:23.249 10:13:36 -- dd/posix.sh@131 -- # tests_forced_aio 00:08:23.249 10:13:36 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:23.249 * Second test run, disabling liburing, forcing AIO 00:08:23.249 10:13:36 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:23.249 10:13:36 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:23.250 10:13:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:23.250 10:13:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:23.250 10:13:36 -- common/autotest_common.sh@10 -- # set +x 00:08:23.250 ************************************ 00:08:23.250 START TEST dd_flag_append_forced_aio 00:08:23.250 ************************************ 00:08:23.250 10:13:36 -- common/autotest_common.sh@1104 -- # append 00:08:23.250 10:13:36 -- dd/posix.sh@16 -- # local dump0 00:08:23.250 10:13:36 -- dd/posix.sh@17 -- # local dump1 00:08:23.250 10:13:36 -- dd/posix.sh@19 -- # gen_bytes 32 00:08:23.250 10:13:36 -- 
dd/common.sh@98 -- # xtrace_disable 00:08:23.250 10:13:36 -- common/autotest_common.sh@10 -- # set +x 00:08:23.250 10:13:36 -- dd/posix.sh@19 -- # dump0=elmh3rgw1qdzfjflilwvqvkqbd01qfwz 00:08:23.250 10:13:36 -- dd/posix.sh@20 -- # gen_bytes 32 00:08:23.250 10:13:36 -- dd/common.sh@98 -- # xtrace_disable 00:08:23.250 10:13:36 -- common/autotest_common.sh@10 -- # set +x 00:08:23.508 10:13:36 -- dd/posix.sh@20 -- # dump1=k86t23hm3reepkn5o9f82mi9jhfotvv4 00:08:23.508 10:13:36 -- dd/posix.sh@22 -- # printf %s elmh3rgw1qdzfjflilwvqvkqbd01qfwz 00:08:23.508 10:13:36 -- dd/posix.sh@23 -- # printf %s k86t23hm3reepkn5o9f82mi9jhfotvv4 00:08:23.508 10:13:36 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:23.508 [2024-07-26 10:13:36.755606] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:23.508 [2024-07-26 10:13:36.755730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70006 ] 00:08:23.508 [2024-07-26 10:13:36.894788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.766 [2024-07-26 10:13:37.000020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.035  Copying: 32/32 [B] (average 31 kBps) 00:08:24.035 00:08:24.035 10:13:37 -- dd/posix.sh@27 -- # [[ k86t23hm3reepkn5o9f82mi9jhfotvv4elmh3rgw1qdzfjflilwvqvkqbd01qfwz == \k\8\6\t\2\3\h\m\3\r\e\e\p\k\n\5\o\9\f\8\2\m\i\9\j\h\f\o\t\v\v\4\e\l\m\h\3\r\g\w\1\q\d\z\f\j\f\l\i\l\w\v\q\v\k\q\b\d\0\1\q\f\w\z ]] 00:08:24.035 00:08:24.035 real 0m0.622s 00:08:24.035 user 0m0.343s 00:08:24.035 sys 0m0.156s 00:08:24.035 10:13:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.035 ************************************ 00:08:24.035 10:13:37 -- common/autotest_common.sh@10 -- # set +x 00:08:24.035 END TEST dd_flag_append_forced_aio 00:08:24.035 ************************************ 00:08:24.035 10:13:37 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:24.035 10:13:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:24.035 10:13:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:24.035 10:13:37 -- common/autotest_common.sh@10 -- # set +x 00:08:24.035 ************************************ 00:08:24.035 START TEST dd_flag_directory_forced_aio 00:08:24.035 ************************************ 00:08:24.035 10:13:37 -- common/autotest_common.sh@1104 -- # directory 00:08:24.035 10:13:37 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:24.035 10:13:37 -- common/autotest_common.sh@640 -- # local es=0 00:08:24.035 10:13:37 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:24.035 10:13:37 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.035 10:13:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:24.035 10:13:37 -- common/autotest_common.sh@632 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.035 10:13:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:24.035 10:13:37 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.035 10:13:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:24.035 10:13:37 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.035 10:13:37 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:24.035 10:13:37 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:24.035 [2024-07-26 10:13:37.426671] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:24.036 [2024-07-26 10:13:37.426814] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70033 ] 00:08:24.294 [2024-07-26 10:13:37.566653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.294 [2024-07-26 10:13:37.677245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.553 [2024-07-26 10:13:37.767359] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:24.553 [2024-07-26 10:13:37.767419] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:24.553 [2024-07-26 10:13:37.767442] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:24.553 [2024-07-26 10:13:37.883397] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:24.554 10:13:37 -- common/autotest_common.sh@643 -- # es=236 00:08:24.554 10:13:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:24.554 10:13:37 -- common/autotest_common.sh@652 -- # es=108 00:08:24.554 10:13:37 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:24.554 10:13:37 -- common/autotest_common.sh@660 -- # es=1 00:08:24.554 10:13:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:24.554 10:13:37 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:24.554 10:13:37 -- common/autotest_common.sh@640 -- # local es=0 00:08:24.554 10:13:37 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:24.554 10:13:37 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.554 10:13:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:24.554 10:13:37 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.554 10:13:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:24.554 10:13:37 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.554 10:13:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:24.554 10:13:37 -- 
common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.554 10:13:37 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:24.554 10:13:37 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:24.813 [2024-07-26 10:13:38.032635] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:24.813 [2024-07-26 10:13:38.032748] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70042 ] 00:08:24.813 [2024-07-26 10:13:38.171386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.813 [2024-07-26 10:13:38.266844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.072 [2024-07-26 10:13:38.352207] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:25.072 [2024-07-26 10:13:38.352259] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:25.072 [2024-07-26 10:13:38.352275] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:25.072 [2024-07-26 10:13:38.464653] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:25.331 10:13:38 -- common/autotest_common.sh@643 -- # es=236 00:08:25.331 10:13:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:25.331 10:13:38 -- common/autotest_common.sh@652 -- # es=108 00:08:25.331 10:13:38 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:25.331 10:13:38 -- common/autotest_common.sh@660 -- # es=1 00:08:25.331 10:13:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:25.331 00:08:25.331 real 0m1.186s 00:08:25.331 user 0m0.665s 00:08:25.331 sys 0m0.310s 00:08:25.331 10:13:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.331 10:13:38 -- common/autotest_common.sh@10 -- # set +x 00:08:25.331 ************************************ 00:08:25.331 END TEST dd_flag_directory_forced_aio 00:08:25.331 ************************************ 00:08:25.331 10:13:38 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:25.331 10:13:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:25.331 10:13:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.331 10:13:38 -- common/autotest_common.sh@10 -- # set +x 00:08:25.331 ************************************ 00:08:25.331 START TEST dd_flag_nofollow_forced_aio 00:08:25.331 ************************************ 00:08:25.331 10:13:38 -- common/autotest_common.sh@1104 -- # nofollow 00:08:25.331 10:13:38 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:25.331 10:13:38 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:25.331 10:13:38 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:25.331 10:13:38 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:25.331 10:13:38 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:25.331 10:13:38 -- common/autotest_common.sh@640 -- # local es=0 00:08:25.331 10:13:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:25.331 10:13:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.331 10:13:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:25.331 10:13:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.331 10:13:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:25.331 10:13:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.331 10:13:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:25.331 10:13:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.331 10:13:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:25.331 10:13:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:25.331 [2024-07-26 10:13:38.665881] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:25.331 [2024-07-26 10:13:38.665967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70071 ] 00:08:25.590 [2024-07-26 10:13:38.798705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.590 [2024-07-26 10:13:38.895882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.590 [2024-07-26 10:13:38.982935] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:25.590 [2024-07-26 10:13:38.983010] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:25.590 [2024-07-26 10:13:38.983026] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:25.850 [2024-07-26 10:13:39.095587] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:25.850 10:13:39 -- common/autotest_common.sh@643 -- # es=216 00:08:25.850 10:13:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:25.850 10:13:39 -- common/autotest_common.sh@652 -- # es=88 00:08:25.850 10:13:39 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:25.850 10:13:39 -- common/autotest_common.sh@660 -- # es=1 00:08:25.850 10:13:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:25.850 10:13:39 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:25.850 10:13:39 -- common/autotest_common.sh@640 -- # local es=0 00:08:25.850 10:13:39 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:25.850 10:13:39 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.850 10:13:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:25.850 10:13:39 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.850 10:13:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:25.850 10:13:39 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.850 10:13:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:25.850 10:13:39 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.850 10:13:39 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:25.850 10:13:39 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:25.850 [2024-07-26 10:13:39.235527] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:25.850 [2024-07-26 10:13:39.235654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70086 ] 00:08:26.109 [2024-07-26 10:13:39.375824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.109 [2024-07-26 10:13:39.483823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.368 [2024-07-26 10:13:39.575607] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:26.368 [2024-07-26 10:13:39.575693] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:26.368 [2024-07-26 10:13:39.575709] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.368 [2024-07-26 10:13:39.689690] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:26.368 10:13:39 -- common/autotest_common.sh@643 -- # es=216 00:08:26.368 10:13:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:26.368 10:13:39 -- common/autotest_common.sh@652 -- # es=88 00:08:26.368 10:13:39 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:26.368 10:13:39 -- common/autotest_common.sh@660 -- # es=1 00:08:26.368 10:13:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:26.368 10:13:39 -- dd/posix.sh@46 -- # gen_bytes 512 00:08:26.368 10:13:39 -- dd/common.sh@98 -- # xtrace_disable 00:08:26.368 10:13:39 -- common/autotest_common.sh@10 -- # set +x 00:08:26.368 10:13:39 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:26.626 [2024-07-26 10:13:39.844246] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
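The nofollow block traced above first plants two symlinks with ln -fs (dd.dump0.link pointing at dd.dump0, dd.dump1.link at dd.dump1) and then expects spdk_dd to refuse them when --iflag=nofollow or --oflag=nofollow is given, which is why both NOT-wrapped runs end in "Too many levels of symbolic links" (the errno text for ELOOP) and a non-zero exit that the harness counts as a pass. The copy whose banner appears just above reads through dd.dump0.link without the flag, so it is expected to follow the link normally. A condensed sketch of the same negative/positive sequence, with the NOT helper approximated by plain ! negation and DD/DUMP paths as abbreviated in the earlier sketches:

  ln -fs "$DUMP0" "$DUMP0.link"
  ln -fs "$DUMP1" "$DUMP1.link"
  ! "$DD" --aio --if="$DUMP0.link" --iflag=nofollow --of="$DUMP1"   # must fail with ELOOP
  ! "$DD" --aio --if="$DUMP0" --of="$DUMP1.link" --oflag=nofollow   # same on the write side
  "$DD" --aio --if="$DUMP0.link" --of="$DUMP1"                      # no flag: the link is followed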
00:08:26.626 [2024-07-26 10:13:39.844353] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70092 ] 00:08:26.626 [2024-07-26 10:13:39.980802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.885 [2024-07-26 10:13:40.084620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.145  Copying: 512/512 [B] (average 500 kBps) 00:08:27.145 00:08:27.145 10:13:40 -- dd/posix.sh@49 -- # [[ ylw5kselmcgrjuym64kb79vau9ks3y9gbat71s8rjjj2tnsgwe9c1yk1n2z6grppnke53rowuaipqfhu6sua0vdn1unfzb5chlmb889hv9lkr3krn9bw42rw0owdypnyyc4yhkmttkrinsr2fgrzkyvrk7z482rx9cx2jboxsa4onuun1u2zmr68mvc1it0z33te0dea4xmhwraoizf3yqxjujo7surphb3mt38imbvtpekjyt69bzon23plozbtmtu3mejav63ez8a9xivcb9bx36e0t39oyn3rr6pe4n6cpy0ae0nlejcbyz5ibjw0fufwpbtxei5rdhzu6muac3lgl875m5ptoowohegkn7lacst2j3m68n6pzsarlcrnba9z3zep0k416ce3b2uxrnu0lv5zxgm16cg7ytjkta5i2vl8ymet5fv7wqfy1isr2b10jutjtnwqg7mi62h4snii6j8omp5it0x6ss5ism6twqpmy0kuw7wy0ynv8xhp == \y\l\w\5\k\s\e\l\m\c\g\r\j\u\y\m\6\4\k\b\7\9\v\a\u\9\k\s\3\y\9\g\b\a\t\7\1\s\8\r\j\j\j\2\t\n\s\g\w\e\9\c\1\y\k\1\n\2\z\6\g\r\p\p\n\k\e\5\3\r\o\w\u\a\i\p\q\f\h\u\6\s\u\a\0\v\d\n\1\u\n\f\z\b\5\c\h\l\m\b\8\8\9\h\v\9\l\k\r\3\k\r\n\9\b\w\4\2\r\w\0\o\w\d\y\p\n\y\y\c\4\y\h\k\m\t\t\k\r\i\n\s\r\2\f\g\r\z\k\y\v\r\k\7\z\4\8\2\r\x\9\c\x\2\j\b\o\x\s\a\4\o\n\u\u\n\1\u\2\z\m\r\6\8\m\v\c\1\i\t\0\z\3\3\t\e\0\d\e\a\4\x\m\h\w\r\a\o\i\z\f\3\y\q\x\j\u\j\o\7\s\u\r\p\h\b\3\m\t\3\8\i\m\b\v\t\p\e\k\j\y\t\6\9\b\z\o\n\2\3\p\l\o\z\b\t\m\t\u\3\m\e\j\a\v\6\3\e\z\8\a\9\x\i\v\c\b\9\b\x\3\6\e\0\t\3\9\o\y\n\3\r\r\6\p\e\4\n\6\c\p\y\0\a\e\0\n\l\e\j\c\b\y\z\5\i\b\j\w\0\f\u\f\w\p\b\t\x\e\i\5\r\d\h\z\u\6\m\u\a\c\3\l\g\l\8\7\5\m\5\p\t\o\o\w\o\h\e\g\k\n\7\l\a\c\s\t\2\j\3\m\6\8\n\6\p\z\s\a\r\l\c\r\n\b\a\9\z\3\z\e\p\0\k\4\1\6\c\e\3\b\2\u\x\r\n\u\0\l\v\5\z\x\g\m\1\6\c\g\7\y\t\j\k\t\a\5\i\2\v\l\8\y\m\e\t\5\f\v\7\w\q\f\y\1\i\s\r\2\b\1\0\j\u\t\j\t\n\w\q\g\7\m\i\6\2\h\4\s\n\i\i\6\j\8\o\m\p\5\i\t\0\x\6\s\s\5\i\s\m\6\t\w\q\p\m\y\0\k\u\w\7\w\y\0\y\n\v\8\x\h\p ]] 00:08:27.145 00:08:27.145 real 0m1.803s 00:08:27.145 user 0m1.012s 00:08:27.145 sys 0m0.458s 00:08:27.145 10:13:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.145 10:13:40 -- common/autotest_common.sh@10 -- # set +x 00:08:27.145 ************************************ 00:08:27.145 END TEST dd_flag_nofollow_forced_aio 00:08:27.145 ************************************ 00:08:27.145 10:13:40 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:08:27.145 10:13:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:27.145 10:13:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:27.145 10:13:40 -- common/autotest_common.sh@10 -- # set +x 00:08:27.145 ************************************ 00:08:27.145 START TEST dd_flag_noatime_forced_aio 00:08:27.145 ************************************ 00:08:27.145 10:13:40 -- common/autotest_common.sh@1104 -- # noatime 00:08:27.145 10:13:40 -- dd/posix.sh@53 -- # local atime_if 00:08:27.145 10:13:40 -- dd/posix.sh@54 -- # local atime_of 00:08:27.145 10:13:40 -- dd/posix.sh@58 -- # gen_bytes 512 00:08:27.145 10:13:40 -- dd/common.sh@98 -- # xtrace_disable 00:08:27.145 10:13:40 -- common/autotest_common.sh@10 -- # set +x 00:08:27.145 10:13:40 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:27.145 10:13:40 -- dd/posix.sh@60 -- # atime_if=1721988820 
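dd_flag_noatime_forced_aio, starting here, repeats the atime check from the non-AIO pass earlier in the log: stat --printf=%X records each dump file's access time in epoch seconds (1721988820 at this point), spdk_dd then reads dd.dump0 with --iflag=noatime, and the (( atime_if == ... )) / (( atime_of == ... )) tests that follow insist the timestamps have not moved; a second copy without noatime must advance the source's atime, which the closing (( atime_if < ... )) comparison asserts. A small sketch of the same idea (DD/DUMP paths as above), assuming the filesystem is not mounted in a way that suppresses atime updates altogether:

  before=$(stat --printf=%X "$DUMP0")
  sleep 1
  "$DD" --aio --if="$DUMP0" --iflag=noatime --of="$DUMP1"
  (( $(stat --printf=%X "$DUMP0") == before ))    # noatime read must leave atime alone
  "$DD" --aio --if="$DUMP0" --of="$DUMP1"
  (( $(stat --printf=%X "$DUMP0") > before ))     # an ordinary read should bump it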
00:08:27.145 10:13:40 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:27.145 10:13:40 -- dd/posix.sh@61 -- # atime_of=1721988820 00:08:27.145 10:13:40 -- dd/posix.sh@66 -- # sleep 1 00:08:28.082 10:13:41 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:28.082 [2024-07-26 10:13:41.533843] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:28.082 [2024-07-26 10:13:41.533986] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70134 ] 00:08:28.341 [2024-07-26 10:13:41.674655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.341 [2024-07-26 10:13:41.772816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.860  Copying: 512/512 [B] (average 500 kBps) 00:08:28.860 00:08:28.860 10:13:42 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:28.860 10:13:42 -- dd/posix.sh@69 -- # (( atime_if == 1721988820 )) 00:08:28.860 10:13:42 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:28.860 10:13:42 -- dd/posix.sh@70 -- # (( atime_of == 1721988820 )) 00:08:28.860 10:13:42 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:28.860 [2024-07-26 10:13:42.152741] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:28.860 [2024-07-26 10:13:42.152848] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70145 ] 00:08:28.860 [2024-07-26 10:13:42.290418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.119 [2024-07-26 10:13:42.386282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.387  Copying: 512/512 [B] (average 500 kBps) 00:08:29.387 00:08:29.387 10:13:42 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:29.387 10:13:42 -- dd/posix.sh@73 -- # (( atime_if < 1721988822 )) 00:08:29.387 00:08:29.387 real 0m2.253s 00:08:29.387 user 0m0.682s 00:08:29.387 sys 0m0.314s 00:08:29.387 10:13:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.387 10:13:42 -- common/autotest_common.sh@10 -- # set +x 00:08:29.387 ************************************ 00:08:29.387 END TEST dd_flag_noatime_forced_aio 00:08:29.387 ************************************ 00:08:29.387 10:13:42 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:29.387 10:13:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:29.387 10:13:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:29.387 10:13:42 -- common/autotest_common.sh@10 -- # set +x 00:08:29.387 ************************************ 00:08:29.387 START TEST dd_flags_misc_forced_aio 00:08:29.387 ************************************ 00:08:29.387 10:13:42 -- common/autotest_common.sh@1104 -- # io 00:08:29.387 10:13:42 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:29.387 10:13:42 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:29.387 10:13:42 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:29.387 10:13:42 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:29.387 10:13:42 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:29.387 10:13:42 -- dd/common.sh@98 -- # xtrace_disable 00:08:29.387 10:13:42 -- common/autotest_common.sh@10 -- # set +x 00:08:29.387 10:13:42 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:29.387 10:13:42 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:29.387 [2024-07-26 10:13:42.834331] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
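Everything from the "* Second test run, disabling liburing, forcing AIO" marker onward carries the extra --aio switch that DD_APP picked up, so the dd_flags_misc_forced_aio pass starting here replays the same direct/nonblock/sync/dsync matrix through spdk_dd's POSIX AIO path instead of the io_uring/liburing path used in the first pass. A sketch of how the shared argv grows, assuming DD_APP begins as an array holding just the binary path (its initialization is not visible in this part of the trace):

  DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)   # assumed starting contents
  DD_APP+=(--aio)                                           # as traced at dd/posix.sh@113
  "${DD_APP[@]}" --if="$DUMP0" --iflag=direct --of="$DUMP1" --oflag=direct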
00:08:29.387 [2024-07-26 10:13:42.834469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70172 ] 00:08:29.645 [2024-07-26 10:13:42.978905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.645 [2024-07-26 10:13:43.077850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.163  Copying: 512/512 [B] (average 500 kBps) 00:08:30.163 00:08:30.163 10:13:43 -- dd/posix.sh@93 -- # [[ w5jbtz5zoz4v9qimk4g82wuwfiovg8zwtyytiq2hv5buuicmoh4stosqyw5xjcsh9dm1xbw7s7c030jc4bp442gtm9c8ids742luh6cdayzptnlb8f2c6jz67obqkohj59vr7lr6lyqycqy5rgwpb43fzq3kav3om2dyn7jqiejchxyou6crwi8jzleuxgq6l62pphwbkuxlrbng7ezh468v29i2oem25ybbfiwghgcjvn2f88f53g26oiju1zuw1h3b6ea6g1vv2rj942pxbvbp8hmu816jk8gktq5dgu9uvcwcus6fs1z71xa81wqes8pfe6jjlf5680h6vwf4tmvjdo4ld5xdwgfr7r84hzmpg7vcsoficpf60fsddlnjer1v20qsng53u7c1g2ple0p8cpbwd8551g54a9rqrrcvzeo0ljpsuk643ffmhl594zombq2isihjsui1jtkcwkftm1jq6t3nkmffncar8a3oqb4whq3z2314l2b1o3o5 == \w\5\j\b\t\z\5\z\o\z\4\v\9\q\i\m\k\4\g\8\2\w\u\w\f\i\o\v\g\8\z\w\t\y\y\t\i\q\2\h\v\5\b\u\u\i\c\m\o\h\4\s\t\o\s\q\y\w\5\x\j\c\s\h\9\d\m\1\x\b\w\7\s\7\c\0\3\0\j\c\4\b\p\4\4\2\g\t\m\9\c\8\i\d\s\7\4\2\l\u\h\6\c\d\a\y\z\p\t\n\l\b\8\f\2\c\6\j\z\6\7\o\b\q\k\o\h\j\5\9\v\r\7\l\r\6\l\y\q\y\c\q\y\5\r\g\w\p\b\4\3\f\z\q\3\k\a\v\3\o\m\2\d\y\n\7\j\q\i\e\j\c\h\x\y\o\u\6\c\r\w\i\8\j\z\l\e\u\x\g\q\6\l\6\2\p\p\h\w\b\k\u\x\l\r\b\n\g\7\e\z\h\4\6\8\v\2\9\i\2\o\e\m\2\5\y\b\b\f\i\w\g\h\g\c\j\v\n\2\f\8\8\f\5\3\g\2\6\o\i\j\u\1\z\u\w\1\h\3\b\6\e\a\6\g\1\v\v\2\r\j\9\4\2\p\x\b\v\b\p\8\h\m\u\8\1\6\j\k\8\g\k\t\q\5\d\g\u\9\u\v\c\w\c\u\s\6\f\s\1\z\7\1\x\a\8\1\w\q\e\s\8\p\f\e\6\j\j\l\f\5\6\8\0\h\6\v\w\f\4\t\m\v\j\d\o\4\l\d\5\x\d\w\g\f\r\7\r\8\4\h\z\m\p\g\7\v\c\s\o\f\i\c\p\f\6\0\f\s\d\d\l\n\j\e\r\1\v\2\0\q\s\n\g\5\3\u\7\c\1\g\2\p\l\e\0\p\8\c\p\b\w\d\8\5\5\1\g\5\4\a\9\r\q\r\r\c\v\z\e\o\0\l\j\p\s\u\k\6\4\3\f\f\m\h\l\5\9\4\z\o\m\b\q\2\i\s\i\h\j\s\u\i\1\j\t\k\c\w\k\f\t\m\1\j\q\6\t\3\n\k\m\f\f\n\c\a\r\8\a\3\o\q\b\4\w\h\q\3\z\2\3\1\4\l\2\b\1\o\3\o\5 ]] 00:08:30.163 10:13:43 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:30.163 10:13:43 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:30.163 [2024-07-26 10:13:43.455036] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:30.163 [2024-07-26 10:13:43.455153] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70185 ] 00:08:30.163 [2024-07-26 10:13:43.586841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.422 [2024-07-26 10:13:43.687281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.681  Copying: 512/512 [B] (average 500 kBps) 00:08:30.681 00:08:30.681 10:13:44 -- dd/posix.sh@93 -- # [[ w5jbtz5zoz4v9qimk4g82wuwfiovg8zwtyytiq2hv5buuicmoh4stosqyw5xjcsh9dm1xbw7s7c030jc4bp442gtm9c8ids742luh6cdayzptnlb8f2c6jz67obqkohj59vr7lr6lyqycqy5rgwpb43fzq3kav3om2dyn7jqiejchxyou6crwi8jzleuxgq6l62pphwbkuxlrbng7ezh468v29i2oem25ybbfiwghgcjvn2f88f53g26oiju1zuw1h3b6ea6g1vv2rj942pxbvbp8hmu816jk8gktq5dgu9uvcwcus6fs1z71xa81wqes8pfe6jjlf5680h6vwf4tmvjdo4ld5xdwgfr7r84hzmpg7vcsoficpf60fsddlnjer1v20qsng53u7c1g2ple0p8cpbwd8551g54a9rqrrcvzeo0ljpsuk643ffmhl594zombq2isihjsui1jtkcwkftm1jq6t3nkmffncar8a3oqb4whq3z2314l2b1o3o5 == \w\5\j\b\t\z\5\z\o\z\4\v\9\q\i\m\k\4\g\8\2\w\u\w\f\i\o\v\g\8\z\w\t\y\y\t\i\q\2\h\v\5\b\u\u\i\c\m\o\h\4\s\t\o\s\q\y\w\5\x\j\c\s\h\9\d\m\1\x\b\w\7\s\7\c\0\3\0\j\c\4\b\p\4\4\2\g\t\m\9\c\8\i\d\s\7\4\2\l\u\h\6\c\d\a\y\z\p\t\n\l\b\8\f\2\c\6\j\z\6\7\o\b\q\k\o\h\j\5\9\v\r\7\l\r\6\l\y\q\y\c\q\y\5\r\g\w\p\b\4\3\f\z\q\3\k\a\v\3\o\m\2\d\y\n\7\j\q\i\e\j\c\h\x\y\o\u\6\c\r\w\i\8\j\z\l\e\u\x\g\q\6\l\6\2\p\p\h\w\b\k\u\x\l\r\b\n\g\7\e\z\h\4\6\8\v\2\9\i\2\o\e\m\2\5\y\b\b\f\i\w\g\h\g\c\j\v\n\2\f\8\8\f\5\3\g\2\6\o\i\j\u\1\z\u\w\1\h\3\b\6\e\a\6\g\1\v\v\2\r\j\9\4\2\p\x\b\v\b\p\8\h\m\u\8\1\6\j\k\8\g\k\t\q\5\d\g\u\9\u\v\c\w\c\u\s\6\f\s\1\z\7\1\x\a\8\1\w\q\e\s\8\p\f\e\6\j\j\l\f\5\6\8\0\h\6\v\w\f\4\t\m\v\j\d\o\4\l\d\5\x\d\w\g\f\r\7\r\8\4\h\z\m\p\g\7\v\c\s\o\f\i\c\p\f\6\0\f\s\d\d\l\n\j\e\r\1\v\2\0\q\s\n\g\5\3\u\7\c\1\g\2\p\l\e\0\p\8\c\p\b\w\d\8\5\5\1\g\5\4\a\9\r\q\r\r\c\v\z\e\o\0\l\j\p\s\u\k\6\4\3\f\f\m\h\l\5\9\4\z\o\m\b\q\2\i\s\i\h\j\s\u\i\1\j\t\k\c\w\k\f\t\m\1\j\q\6\t\3\n\k\m\f\f\n\c\a\r\8\a\3\o\q\b\4\w\h\q\3\z\2\3\1\4\l\2\b\1\o\3\o\5 ]] 00:08:30.681 10:13:44 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:30.681 10:13:44 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:30.681 [2024-07-26 10:13:44.065853] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:30.681 [2024-07-26 10:13:44.065942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70188 ] 00:08:30.940 [2024-07-26 10:13:44.196658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.940 [2024-07-26 10:13:44.300281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.199  Copying: 512/512 [B] (average 100 kBps) 00:08:31.199 00:08:31.199 10:13:44 -- dd/posix.sh@93 -- # [[ w5jbtz5zoz4v9qimk4g82wuwfiovg8zwtyytiq2hv5buuicmoh4stosqyw5xjcsh9dm1xbw7s7c030jc4bp442gtm9c8ids742luh6cdayzptnlb8f2c6jz67obqkohj59vr7lr6lyqycqy5rgwpb43fzq3kav3om2dyn7jqiejchxyou6crwi8jzleuxgq6l62pphwbkuxlrbng7ezh468v29i2oem25ybbfiwghgcjvn2f88f53g26oiju1zuw1h3b6ea6g1vv2rj942pxbvbp8hmu816jk8gktq5dgu9uvcwcus6fs1z71xa81wqes8pfe6jjlf5680h6vwf4tmvjdo4ld5xdwgfr7r84hzmpg7vcsoficpf60fsddlnjer1v20qsng53u7c1g2ple0p8cpbwd8551g54a9rqrrcvzeo0ljpsuk643ffmhl594zombq2isihjsui1jtkcwkftm1jq6t3nkmffncar8a3oqb4whq3z2314l2b1o3o5 == \w\5\j\b\t\z\5\z\o\z\4\v\9\q\i\m\k\4\g\8\2\w\u\w\f\i\o\v\g\8\z\w\t\y\y\t\i\q\2\h\v\5\b\u\u\i\c\m\o\h\4\s\t\o\s\q\y\w\5\x\j\c\s\h\9\d\m\1\x\b\w\7\s\7\c\0\3\0\j\c\4\b\p\4\4\2\g\t\m\9\c\8\i\d\s\7\4\2\l\u\h\6\c\d\a\y\z\p\t\n\l\b\8\f\2\c\6\j\z\6\7\o\b\q\k\o\h\j\5\9\v\r\7\l\r\6\l\y\q\y\c\q\y\5\r\g\w\p\b\4\3\f\z\q\3\k\a\v\3\o\m\2\d\y\n\7\j\q\i\e\j\c\h\x\y\o\u\6\c\r\w\i\8\j\z\l\e\u\x\g\q\6\l\6\2\p\p\h\w\b\k\u\x\l\r\b\n\g\7\e\z\h\4\6\8\v\2\9\i\2\o\e\m\2\5\y\b\b\f\i\w\g\h\g\c\j\v\n\2\f\8\8\f\5\3\g\2\6\o\i\j\u\1\z\u\w\1\h\3\b\6\e\a\6\g\1\v\v\2\r\j\9\4\2\p\x\b\v\b\p\8\h\m\u\8\1\6\j\k\8\g\k\t\q\5\d\g\u\9\u\v\c\w\c\u\s\6\f\s\1\z\7\1\x\a\8\1\w\q\e\s\8\p\f\e\6\j\j\l\f\5\6\8\0\h\6\v\w\f\4\t\m\v\j\d\o\4\l\d\5\x\d\w\g\f\r\7\r\8\4\h\z\m\p\g\7\v\c\s\o\f\i\c\p\f\6\0\f\s\d\d\l\n\j\e\r\1\v\2\0\q\s\n\g\5\3\u\7\c\1\g\2\p\l\e\0\p\8\c\p\b\w\d\8\5\5\1\g\5\4\a\9\r\q\r\r\c\v\z\e\o\0\l\j\p\s\u\k\6\4\3\f\f\m\h\l\5\9\4\z\o\m\b\q\2\i\s\i\h\j\s\u\i\1\j\t\k\c\w\k\f\t\m\1\j\q\6\t\3\n\k\m\f\f\n\c\a\r\8\a\3\o\q\b\4\w\h\q\3\z\2\3\1\4\l\2\b\1\o\3\o\5 ]] 00:08:31.199 10:13:44 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:31.199 10:13:44 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:31.459 [2024-07-26 10:13:44.695042] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
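The "Copying: 512/512 [B] (average N kBps)" figures swing between roughly 100 and 500 kBps across otherwise identical 512-byte copies. Assuming the average is simply bytes copied divided by elapsed wall-clock time, a single 512-byte transfer completing in one versus five milliseconds already spans that whole range, so the number mostly reflects per-run timing jitter rather than storage throughput. The arithmetic, with hypothetical timings since the elapsed time itself is not printed:

  awk 'BEGIN { b = 512; printf "%.0f kBps vs %.0f kBps\n", b/0.001/1000, b/0.005/1000 }'
  # prints: 512 kBps vs 102 kBps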
00:08:31.459 [2024-07-26 10:13:44.695161] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70200 ] 00:08:31.459 [2024-07-26 10:13:44.833092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.718 [2024-07-26 10:13:44.928847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.977  Copying: 512/512 [B] (average 500 kBps) 00:08:31.977 00:08:31.977 10:13:45 -- dd/posix.sh@93 -- # [[ w5jbtz5zoz4v9qimk4g82wuwfiovg8zwtyytiq2hv5buuicmoh4stosqyw5xjcsh9dm1xbw7s7c030jc4bp442gtm9c8ids742luh6cdayzptnlb8f2c6jz67obqkohj59vr7lr6lyqycqy5rgwpb43fzq3kav3om2dyn7jqiejchxyou6crwi8jzleuxgq6l62pphwbkuxlrbng7ezh468v29i2oem25ybbfiwghgcjvn2f88f53g26oiju1zuw1h3b6ea6g1vv2rj942pxbvbp8hmu816jk8gktq5dgu9uvcwcus6fs1z71xa81wqes8pfe6jjlf5680h6vwf4tmvjdo4ld5xdwgfr7r84hzmpg7vcsoficpf60fsddlnjer1v20qsng53u7c1g2ple0p8cpbwd8551g54a9rqrrcvzeo0ljpsuk643ffmhl594zombq2isihjsui1jtkcwkftm1jq6t3nkmffncar8a3oqb4whq3z2314l2b1o3o5 == \w\5\j\b\t\z\5\z\o\z\4\v\9\q\i\m\k\4\g\8\2\w\u\w\f\i\o\v\g\8\z\w\t\y\y\t\i\q\2\h\v\5\b\u\u\i\c\m\o\h\4\s\t\o\s\q\y\w\5\x\j\c\s\h\9\d\m\1\x\b\w\7\s\7\c\0\3\0\j\c\4\b\p\4\4\2\g\t\m\9\c\8\i\d\s\7\4\2\l\u\h\6\c\d\a\y\z\p\t\n\l\b\8\f\2\c\6\j\z\6\7\o\b\q\k\o\h\j\5\9\v\r\7\l\r\6\l\y\q\y\c\q\y\5\r\g\w\p\b\4\3\f\z\q\3\k\a\v\3\o\m\2\d\y\n\7\j\q\i\e\j\c\h\x\y\o\u\6\c\r\w\i\8\j\z\l\e\u\x\g\q\6\l\6\2\p\p\h\w\b\k\u\x\l\r\b\n\g\7\e\z\h\4\6\8\v\2\9\i\2\o\e\m\2\5\y\b\b\f\i\w\g\h\g\c\j\v\n\2\f\8\8\f\5\3\g\2\6\o\i\j\u\1\z\u\w\1\h\3\b\6\e\a\6\g\1\v\v\2\r\j\9\4\2\p\x\b\v\b\p\8\h\m\u\8\1\6\j\k\8\g\k\t\q\5\d\g\u\9\u\v\c\w\c\u\s\6\f\s\1\z\7\1\x\a\8\1\w\q\e\s\8\p\f\e\6\j\j\l\f\5\6\8\0\h\6\v\w\f\4\t\m\v\j\d\o\4\l\d\5\x\d\w\g\f\r\7\r\8\4\h\z\m\p\g\7\v\c\s\o\f\i\c\p\f\6\0\f\s\d\d\l\n\j\e\r\1\v\2\0\q\s\n\g\5\3\u\7\c\1\g\2\p\l\e\0\p\8\c\p\b\w\d\8\5\5\1\g\5\4\a\9\r\q\r\r\c\v\z\e\o\0\l\j\p\s\u\k\6\4\3\f\f\m\h\l\5\9\4\z\o\m\b\q\2\i\s\i\h\j\s\u\i\1\j\t\k\c\w\k\f\t\m\1\j\q\6\t\3\n\k\m\f\f\n\c\a\r\8\a\3\o\q\b\4\w\h\q\3\z\2\3\1\4\l\2\b\1\o\3\o\5 ]] 00:08:31.977 10:13:45 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:31.977 10:13:45 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:31.977 10:13:45 -- dd/common.sh@98 -- # xtrace_disable 00:08:31.977 10:13:45 -- common/autotest_common.sh@10 -- # set +x 00:08:31.977 10:13:45 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:31.977 10:13:45 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:31.977 [2024-07-26 10:13:45.305554] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:31.977 [2024-07-26 10:13:45.305673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70213 ] 00:08:32.254 [2024-07-26 10:13:45.436728] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.254 [2024-07-26 10:13:45.538053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.513  Copying: 512/512 [B] (average 500 kBps) 00:08:32.513 00:08:32.513 10:13:45 -- dd/posix.sh@93 -- # [[ k5iyn0k1xd9kvm8ssf83mz3lkpy7vvjvqto58qcqyni33ywm6ghclmyu1fnr6bb0vhe48mdvjux7mv9j42tjlctu5f5kwdxvhab6o0komvi4edk65cgmxu9xlkz2u2psn76cs7wxppp3gdp1h10zpbeo6rlreds6xek9dp1v68m563c7i8lzdefyyj2f6qj98lf07srxozqm6fmxw3df3whfeuynedqmxl4iet8kagi2irwf7x2ldw5rmees6fvnsyzujq03wpuiotwudlpuvla3umtpg9d7o85lp2va3zukt385kdb41hwfctktnum8nmp92mczly3lov1clxrdk86rcn8eupm4mpy7ev0pufa05jdoo0v5e7h72dgopapaoaze2ec9k693hvbh99gd3hu6jik7mnts30tizdj62nhpeuuforezv9gssjj6azl11k81gtf29714j8vx9v54yqbjvy4m6bdyx0v9pvvblwrs78pfwb5367l3kq3ykr3h == \k\5\i\y\n\0\k\1\x\d\9\k\v\m\8\s\s\f\8\3\m\z\3\l\k\p\y\7\v\v\j\v\q\t\o\5\8\q\c\q\y\n\i\3\3\y\w\m\6\g\h\c\l\m\y\u\1\f\n\r\6\b\b\0\v\h\e\4\8\m\d\v\j\u\x\7\m\v\9\j\4\2\t\j\l\c\t\u\5\f\5\k\w\d\x\v\h\a\b\6\o\0\k\o\m\v\i\4\e\d\k\6\5\c\g\m\x\u\9\x\l\k\z\2\u\2\p\s\n\7\6\c\s\7\w\x\p\p\p\3\g\d\p\1\h\1\0\z\p\b\e\o\6\r\l\r\e\d\s\6\x\e\k\9\d\p\1\v\6\8\m\5\6\3\c\7\i\8\l\z\d\e\f\y\y\j\2\f\6\q\j\9\8\l\f\0\7\s\r\x\o\z\q\m\6\f\m\x\w\3\d\f\3\w\h\f\e\u\y\n\e\d\q\m\x\l\4\i\e\t\8\k\a\g\i\2\i\r\w\f\7\x\2\l\d\w\5\r\m\e\e\s\6\f\v\n\s\y\z\u\j\q\0\3\w\p\u\i\o\t\w\u\d\l\p\u\v\l\a\3\u\m\t\p\g\9\d\7\o\8\5\l\p\2\v\a\3\z\u\k\t\3\8\5\k\d\b\4\1\h\w\f\c\t\k\t\n\u\m\8\n\m\p\9\2\m\c\z\l\y\3\l\o\v\1\c\l\x\r\d\k\8\6\r\c\n\8\e\u\p\m\4\m\p\y\7\e\v\0\p\u\f\a\0\5\j\d\o\o\0\v\5\e\7\h\7\2\d\g\o\p\a\p\a\o\a\z\e\2\e\c\9\k\6\9\3\h\v\b\h\9\9\g\d\3\h\u\6\j\i\k\7\m\n\t\s\3\0\t\i\z\d\j\6\2\n\h\p\e\u\u\f\o\r\e\z\v\9\g\s\s\j\j\6\a\z\l\1\1\k\8\1\g\t\f\2\9\7\1\4\j\8\v\x\9\v\5\4\y\q\b\j\v\y\4\m\6\b\d\y\x\0\v\9\p\v\v\b\l\w\r\s\7\8\p\f\w\b\5\3\6\7\l\3\k\q\3\y\k\r\3\h ]] 00:08:32.513 10:13:45 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:32.513 10:13:45 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:32.513 [2024-07-26 10:13:45.923745] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
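The pattern in the comparison just above changes (w5jbt... in the earlier iterations, k5iyn... from here on) because gen_bytes 512 runs again at the top of each read-flag group, giving every flag_ro family its own 512-byte payload. The helper's implementation is not shown in this trace; something along these lines would produce equivalent input (an assumed stand-in, not the real gen_bytes):

  gen_bytes() {    # emit $1 random lowercase-alphanumeric bytes on stdout
    tr -dc 'a-z0-9' < /dev/urandom | head -c "$1"
  }
  gen_bytes 512 > "$DUMP0"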
00:08:32.513 [2024-07-26 10:13:45.923854] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70215 ] 00:08:32.771 [2024-07-26 10:13:46.060637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.772 [2024-07-26 10:13:46.164732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.289  Copying: 512/512 [B] (average 500 kBps) 00:08:33.289 00:08:33.289 10:13:46 -- dd/posix.sh@93 -- # [[ k5iyn0k1xd9kvm8ssf83mz3lkpy7vvjvqto58qcqyni33ywm6ghclmyu1fnr6bb0vhe48mdvjux7mv9j42tjlctu5f5kwdxvhab6o0komvi4edk65cgmxu9xlkz2u2psn76cs7wxppp3gdp1h10zpbeo6rlreds6xek9dp1v68m563c7i8lzdefyyj2f6qj98lf07srxozqm6fmxw3df3whfeuynedqmxl4iet8kagi2irwf7x2ldw5rmees6fvnsyzujq03wpuiotwudlpuvla3umtpg9d7o85lp2va3zukt385kdb41hwfctktnum8nmp92mczly3lov1clxrdk86rcn8eupm4mpy7ev0pufa05jdoo0v5e7h72dgopapaoaze2ec9k693hvbh99gd3hu6jik7mnts30tizdj62nhpeuuforezv9gssjj6azl11k81gtf29714j8vx9v54yqbjvy4m6bdyx0v9pvvblwrs78pfwb5367l3kq3ykr3h == \k\5\i\y\n\0\k\1\x\d\9\k\v\m\8\s\s\f\8\3\m\z\3\l\k\p\y\7\v\v\j\v\q\t\o\5\8\q\c\q\y\n\i\3\3\y\w\m\6\g\h\c\l\m\y\u\1\f\n\r\6\b\b\0\v\h\e\4\8\m\d\v\j\u\x\7\m\v\9\j\4\2\t\j\l\c\t\u\5\f\5\k\w\d\x\v\h\a\b\6\o\0\k\o\m\v\i\4\e\d\k\6\5\c\g\m\x\u\9\x\l\k\z\2\u\2\p\s\n\7\6\c\s\7\w\x\p\p\p\3\g\d\p\1\h\1\0\z\p\b\e\o\6\r\l\r\e\d\s\6\x\e\k\9\d\p\1\v\6\8\m\5\6\3\c\7\i\8\l\z\d\e\f\y\y\j\2\f\6\q\j\9\8\l\f\0\7\s\r\x\o\z\q\m\6\f\m\x\w\3\d\f\3\w\h\f\e\u\y\n\e\d\q\m\x\l\4\i\e\t\8\k\a\g\i\2\i\r\w\f\7\x\2\l\d\w\5\r\m\e\e\s\6\f\v\n\s\y\z\u\j\q\0\3\w\p\u\i\o\t\w\u\d\l\p\u\v\l\a\3\u\m\t\p\g\9\d\7\o\8\5\l\p\2\v\a\3\z\u\k\t\3\8\5\k\d\b\4\1\h\w\f\c\t\k\t\n\u\m\8\n\m\p\9\2\m\c\z\l\y\3\l\o\v\1\c\l\x\r\d\k\8\6\r\c\n\8\e\u\p\m\4\m\p\y\7\e\v\0\p\u\f\a\0\5\j\d\o\o\0\v\5\e\7\h\7\2\d\g\o\p\a\p\a\o\a\z\e\2\e\c\9\k\6\9\3\h\v\b\h\9\9\g\d\3\h\u\6\j\i\k\7\m\n\t\s\3\0\t\i\z\d\j\6\2\n\h\p\e\u\u\f\o\r\e\z\v\9\g\s\s\j\j\6\a\z\l\1\1\k\8\1\g\t\f\2\9\7\1\4\j\8\v\x\9\v\5\4\y\q\b\j\v\y\4\m\6\b\d\y\x\0\v\9\p\v\v\b\l\w\r\s\7\8\p\f\w\b\5\3\6\7\l\3\k\q\3\y\k\r\3\h ]] 00:08:33.289 10:13:46 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:33.289 10:13:46 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:33.289 [2024-07-26 10:13:46.561361] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:33.289 [2024-07-26 10:13:46.561481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70228 ] 00:08:33.289 [2024-07-26 10:13:46.695152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.548 [2024-07-26 10:13:46.797957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.807  Copying: 512/512 [B] (average 500 kBps) 00:08:33.807 00:08:33.808 10:13:47 -- dd/posix.sh@93 -- # [[ k5iyn0k1xd9kvm8ssf83mz3lkpy7vvjvqto58qcqyni33ywm6ghclmyu1fnr6bb0vhe48mdvjux7mv9j42tjlctu5f5kwdxvhab6o0komvi4edk65cgmxu9xlkz2u2psn76cs7wxppp3gdp1h10zpbeo6rlreds6xek9dp1v68m563c7i8lzdefyyj2f6qj98lf07srxozqm6fmxw3df3whfeuynedqmxl4iet8kagi2irwf7x2ldw5rmees6fvnsyzujq03wpuiotwudlpuvla3umtpg9d7o85lp2va3zukt385kdb41hwfctktnum8nmp92mczly3lov1clxrdk86rcn8eupm4mpy7ev0pufa05jdoo0v5e7h72dgopapaoaze2ec9k693hvbh99gd3hu6jik7mnts30tizdj62nhpeuuforezv9gssjj6azl11k81gtf29714j8vx9v54yqbjvy4m6bdyx0v9pvvblwrs78pfwb5367l3kq3ykr3h == \k\5\i\y\n\0\k\1\x\d\9\k\v\m\8\s\s\f\8\3\m\z\3\l\k\p\y\7\v\v\j\v\q\t\o\5\8\q\c\q\y\n\i\3\3\y\w\m\6\g\h\c\l\m\y\u\1\f\n\r\6\b\b\0\v\h\e\4\8\m\d\v\j\u\x\7\m\v\9\j\4\2\t\j\l\c\t\u\5\f\5\k\w\d\x\v\h\a\b\6\o\0\k\o\m\v\i\4\e\d\k\6\5\c\g\m\x\u\9\x\l\k\z\2\u\2\p\s\n\7\6\c\s\7\w\x\p\p\p\3\g\d\p\1\h\1\0\z\p\b\e\o\6\r\l\r\e\d\s\6\x\e\k\9\d\p\1\v\6\8\m\5\6\3\c\7\i\8\l\z\d\e\f\y\y\j\2\f\6\q\j\9\8\l\f\0\7\s\r\x\o\z\q\m\6\f\m\x\w\3\d\f\3\w\h\f\e\u\y\n\e\d\q\m\x\l\4\i\e\t\8\k\a\g\i\2\i\r\w\f\7\x\2\l\d\w\5\r\m\e\e\s\6\f\v\n\s\y\z\u\j\q\0\3\w\p\u\i\o\t\w\u\d\l\p\u\v\l\a\3\u\m\t\p\g\9\d\7\o\8\5\l\p\2\v\a\3\z\u\k\t\3\8\5\k\d\b\4\1\h\w\f\c\t\k\t\n\u\m\8\n\m\p\9\2\m\c\z\l\y\3\l\o\v\1\c\l\x\r\d\k\8\6\r\c\n\8\e\u\p\m\4\m\p\y\7\e\v\0\p\u\f\a\0\5\j\d\o\o\0\v\5\e\7\h\7\2\d\g\o\p\a\p\a\o\a\z\e\2\e\c\9\k\6\9\3\h\v\b\h\9\9\g\d\3\h\u\6\j\i\k\7\m\n\t\s\3\0\t\i\z\d\j\6\2\n\h\p\e\u\u\f\o\r\e\z\v\9\g\s\s\j\j\6\a\z\l\1\1\k\8\1\g\t\f\2\9\7\1\4\j\8\v\x\9\v\5\4\y\q\b\j\v\y\4\m\6\b\d\y\x\0\v\9\p\v\v\b\l\w\r\s\7\8\p\f\w\b\5\3\6\7\l\3\k\q\3\y\k\r\3\h ]] 00:08:33.808 10:13:47 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:33.808 10:13:47 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:33.808 [2024-07-26 10:13:47.163606] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:33.808 [2024-07-26 10:13:47.163704] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70230 ] 00:08:34.066 [2024-07-26 10:13:47.298203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.066 [2024-07-26 10:13:47.404687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.325  Copying: 512/512 [B] (average 166 kBps) 00:08:34.325 00:08:34.325 10:13:47 -- dd/posix.sh@93 -- # [[ k5iyn0k1xd9kvm8ssf83mz3lkpy7vvjvqto58qcqyni33ywm6ghclmyu1fnr6bb0vhe48mdvjux7mv9j42tjlctu5f5kwdxvhab6o0komvi4edk65cgmxu9xlkz2u2psn76cs7wxppp3gdp1h10zpbeo6rlreds6xek9dp1v68m563c7i8lzdefyyj2f6qj98lf07srxozqm6fmxw3df3whfeuynedqmxl4iet8kagi2irwf7x2ldw5rmees6fvnsyzujq03wpuiotwudlpuvla3umtpg9d7o85lp2va3zukt385kdb41hwfctktnum8nmp92mczly3lov1clxrdk86rcn8eupm4mpy7ev0pufa05jdoo0v5e7h72dgopapaoaze2ec9k693hvbh99gd3hu6jik7mnts30tizdj62nhpeuuforezv9gssjj6azl11k81gtf29714j8vx9v54yqbjvy4m6bdyx0v9pvvblwrs78pfwb5367l3kq3ykr3h == \k\5\i\y\n\0\k\1\x\d\9\k\v\m\8\s\s\f\8\3\m\z\3\l\k\p\y\7\v\v\j\v\q\t\o\5\8\q\c\q\y\n\i\3\3\y\w\m\6\g\h\c\l\m\y\u\1\f\n\r\6\b\b\0\v\h\e\4\8\m\d\v\j\u\x\7\m\v\9\j\4\2\t\j\l\c\t\u\5\f\5\k\w\d\x\v\h\a\b\6\o\0\k\o\m\v\i\4\e\d\k\6\5\c\g\m\x\u\9\x\l\k\z\2\u\2\p\s\n\7\6\c\s\7\w\x\p\p\p\3\g\d\p\1\h\1\0\z\p\b\e\o\6\r\l\r\e\d\s\6\x\e\k\9\d\p\1\v\6\8\m\5\6\3\c\7\i\8\l\z\d\e\f\y\y\j\2\f\6\q\j\9\8\l\f\0\7\s\r\x\o\z\q\m\6\f\m\x\w\3\d\f\3\w\h\f\e\u\y\n\e\d\q\m\x\l\4\i\e\t\8\k\a\g\i\2\i\r\w\f\7\x\2\l\d\w\5\r\m\e\e\s\6\f\v\n\s\y\z\u\j\q\0\3\w\p\u\i\o\t\w\u\d\l\p\u\v\l\a\3\u\m\t\p\g\9\d\7\o\8\5\l\p\2\v\a\3\z\u\k\t\3\8\5\k\d\b\4\1\h\w\f\c\t\k\t\n\u\m\8\n\m\p\9\2\m\c\z\l\y\3\l\o\v\1\c\l\x\r\d\k\8\6\r\c\n\8\e\u\p\m\4\m\p\y\7\e\v\0\p\u\f\a\0\5\j\d\o\o\0\v\5\e\7\h\7\2\d\g\o\p\a\p\a\o\a\z\e\2\e\c\9\k\6\9\3\h\v\b\h\9\9\g\d\3\h\u\6\j\i\k\7\m\n\t\s\3\0\t\i\z\d\j\6\2\n\h\p\e\u\u\f\o\r\e\z\v\9\g\s\s\j\j\6\a\z\l\1\1\k\8\1\g\t\f\2\9\7\1\4\j\8\v\x\9\v\5\4\y\q\b\j\v\y\4\m\6\b\d\y\x\0\v\9\p\v\v\b\l\w\r\s\7\8\p\f\w\b\5\3\6\7\l\3\k\q\3\y\k\r\3\h ]] 00:08:34.325 00:08:34.325 real 0m4.980s 00:08:34.325 user 0m2.840s 00:08:34.325 sys 0m1.161s 00:08:34.325 10:13:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.325 10:13:47 -- common/autotest_common.sh@10 -- # set +x 00:08:34.325 ************************************ 00:08:34.325 END TEST dd_flags_misc_forced_aio 00:08:34.325 ************************************ 00:08:34.584 10:13:47 -- dd/posix.sh@1 -- # cleanup 00:08:34.584 10:13:47 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:34.584 10:13:47 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:34.584 00:08:34.584 real 0m22.614s 00:08:34.584 user 0m11.514s 00:08:34.584 sys 0m5.232s 00:08:34.584 10:13:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.584 10:13:47 -- common/autotest_common.sh@10 -- # set +x 00:08:34.584 ************************************ 00:08:34.584 END TEST spdk_dd_posix 00:08:34.584 ************************************ 00:08:34.584 10:13:47 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:34.584 10:13:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:34.584 10:13:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:34.584 10:13:47 -- 
common/autotest_common.sh@10 -- # set +x 00:08:34.584 ************************************ 00:08:34.584 START TEST spdk_dd_malloc 00:08:34.584 ************************************ 00:08:34.584 10:13:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:34.584 * Looking for test storage... 00:08:34.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:34.584 10:13:47 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:34.584 10:13:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.584 10:13:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.584 10:13:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.584 10:13:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.584 10:13:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.584 10:13:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.584 10:13:47 -- paths/export.sh@5 -- # export PATH 00:08:34.584 10:13:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.584 10:13:47 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:34.584 10:13:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:34.584 10:13:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:34.584 10:13:47 -- common/autotest_common.sh@10 -- # set +x 00:08:34.584 ************************************ 00:08:34.584 START TEST dd_malloc_copy 00:08:34.584 
************************************ 00:08:34.584 10:13:47 -- common/autotest_common.sh@1104 -- # malloc_copy 00:08:34.584 10:13:47 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:34.584 10:13:47 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:34.584 10:13:47 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:34.584 10:13:47 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:34.584 10:13:47 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:34.584 10:13:47 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:34.584 10:13:47 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:34.584 10:13:47 -- dd/malloc.sh@28 -- # gen_conf 00:08:34.584 10:13:47 -- dd/common.sh@31 -- # xtrace_disable 00:08:34.584 10:13:47 -- common/autotest_common.sh@10 -- # set +x 00:08:34.584 [2024-07-26 10:13:47.986174] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:34.584 [2024-07-26 10:13:47.986801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70303 ] 00:08:34.584 { 00:08:34.584 "subsystems": [ 00:08:34.584 { 00:08:34.584 "subsystem": "bdev", 00:08:34.584 "config": [ 00:08:34.584 { 00:08:34.584 "params": { 00:08:34.584 "block_size": 512, 00:08:34.584 "num_blocks": 1048576, 00:08:34.584 "name": "malloc0" 00:08:34.584 }, 00:08:34.584 "method": "bdev_malloc_create" 00:08:34.584 }, 00:08:34.584 { 00:08:34.584 "params": { 00:08:34.584 "block_size": 512, 00:08:34.584 "num_blocks": 1048576, 00:08:34.584 "name": "malloc1" 00:08:34.584 }, 00:08:34.584 "method": "bdev_malloc_create" 00:08:34.584 }, 00:08:34.584 { 00:08:34.584 "method": "bdev_wait_for_examine" 00:08:34.584 } 00:08:34.584 ] 00:08:34.584 } 00:08:34.584 ] 00:08:34.584 } 00:08:34.843 [2024-07-26 10:13:48.127949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.843 [2024-07-26 10:13:48.236722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.422  Copying: 205/512 [MB] (205 MBps) Copying: 409/512 [MB] (204 MBps) Copying: 512/512 [MB] (average 204 MBps) 00:08:38.422 00:08:38.422 10:13:51 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:38.422 10:13:51 -- dd/malloc.sh@33 -- # gen_conf 00:08:38.422 10:13:51 -- dd/common.sh@31 -- # xtrace_disable 00:08:38.422 10:13:51 -- common/autotest_common.sh@10 -- # set +x 00:08:38.422 [2024-07-26 10:13:51.799994] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:38.422 [2024-07-26 10:13:51.800128] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70356 ] 00:08:38.422 { 00:08:38.422 "subsystems": [ 00:08:38.422 { 00:08:38.422 "subsystem": "bdev", 00:08:38.422 "config": [ 00:08:38.422 { 00:08:38.422 "params": { 00:08:38.422 "block_size": 512, 00:08:38.422 "num_blocks": 1048576, 00:08:38.422 "name": "malloc0" 00:08:38.422 }, 00:08:38.422 "method": "bdev_malloc_create" 00:08:38.422 }, 00:08:38.422 { 00:08:38.422 "params": { 00:08:38.422 "block_size": 512, 00:08:38.422 "num_blocks": 1048576, 00:08:38.422 "name": "malloc1" 00:08:38.422 }, 00:08:38.422 "method": "bdev_malloc_create" 00:08:38.422 }, 00:08:38.422 { 00:08:38.422 "method": "bdev_wait_for_examine" 00:08:38.422 } 00:08:38.422 ] 00:08:38.422 } 00:08:38.422 ] 00:08:38.422 } 00:08:38.681 [2024-07-26 10:13:51.933499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.681 [2024-07-26 10:13:52.033910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.162  Copying: 205/512 [MB] (205 MBps) Copying: 410/512 [MB] (205 MBps) Copying: 512/512 [MB] (average 205 MBps) 00:08:42.162 00:08:42.162 00:08:42.162 real 0m7.583s 00:08:42.162 user 0m6.564s 00:08:42.162 sys 0m0.854s 00:08:42.162 10:13:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.162 10:13:55 -- common/autotest_common.sh@10 -- # set +x 00:08:42.162 ************************************ 00:08:42.162 END TEST dd_malloc_copy 00:08:42.162 ************************************ 00:08:42.162 00:08:42.162 real 0m7.711s 00:08:42.162 user 0m6.618s 00:08:42.162 sys 0m0.929s 00:08:42.162 10:13:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.162 ************************************ 00:08:42.162 END TEST spdk_dd_malloc 00:08:42.162 10:13:55 -- common/autotest_common.sh@10 -- # set +x 00:08:42.162 ************************************ 00:08:42.162 10:13:55 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:08:42.162 10:13:55 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:42.162 10:13:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:42.162 10:13:55 -- common/autotest_common.sh@10 -- # set +x 00:08:42.162 ************************************ 00:08:42.162 START TEST spdk_dd_bdev_to_bdev 00:08:42.162 ************************************ 00:08:42.162 10:13:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:08:42.420 * Looking for test storage... 
00:08:42.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:42.420 10:13:55 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:42.420 10:13:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.421 10:13:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.421 10:13:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.421 10:13:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.421 10:13:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.421 10:13:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.421 10:13:55 -- paths/export.sh@5 -- # export PATH 00:08:42.421 10:13:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.421 10:13:55 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:42.421 10:13:55 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:42.421 10:13:55 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:42.421 10:13:55 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:42.421 10:13:55 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:42.421 10:13:55 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:42.421 10:13:55 -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:06.0 00:08:42.421 10:13:55 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:42.421 10:13:55 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:42.421 10:13:55 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:08:42.421 10:13:55 -- dd/bdev_to_bdev.sh@55 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:08:42.421 10:13:55 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:42.421 10:13:55 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:08:42.421 10:13:55 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:42.421 10:13:55 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:42.421 10:13:55 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:42.421 10:13:55 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:42.421 10:13:55 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:42.421 10:13:55 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:42.421 10:13:55 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:42.421 10:13:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:42.421 10:13:55 -- common/autotest_common.sh@10 -- # set +x 00:08:42.421 ************************************ 00:08:42.421 START TEST dd_inflate_file 00:08:42.421 ************************************ 00:08:42.421 10:13:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:42.421 [2024-07-26 10:13:55.750685] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:42.421 [2024-07-26 10:13:55.750780] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70460 ] 00:08:42.680 [2024-07-26 10:13:55.889250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.680 [2024-07-26 10:13:55.987127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.938  Copying: 64/64 [MB] (average 1641 MBps) 00:08:42.938 00:08:42.938 00:08:42.938 real 0m0.684s 00:08:42.938 user 0m0.388s 00:08:42.938 sys 0m0.179s 00:08:42.938 10:13:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.938 ************************************ 00:08:42.938 END TEST dd_inflate_file 00:08:42.938 ************************************ 00:08:42.938 10:13:56 -- common/autotest_common.sh@10 -- # set +x 00:08:43.197 10:13:56 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:43.197 10:13:56 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:43.197 10:13:56 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:43.197 10:13:56 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:43.197 10:13:56 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:43.197 10:13:56 -- dd/common.sh@31 -- # xtrace_disable 00:08:43.197 10:13:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:43.197 10:13:56 -- common/autotest_common.sh@10 -- # set +x 00:08:43.197 10:13:56 -- common/autotest_common.sh@10 -- # set +x 00:08:43.197 ************************************ 00:08:43.197 START TEST dd_copy_to_out_bdev 
00:08:43.197 ************************************ 00:08:43.197 10:13:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:43.197 [2024-07-26 10:13:56.487356] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:43.197 [2024-07-26 10:13:56.487479] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70496 ] 00:08:43.197 { 00:08:43.197 "subsystems": [ 00:08:43.197 { 00:08:43.197 "subsystem": "bdev", 00:08:43.197 "config": [ 00:08:43.197 { 00:08:43.197 "params": { 00:08:43.197 "trtype": "pcie", 00:08:43.197 "traddr": "0000:00:06.0", 00:08:43.197 "name": "Nvme0" 00:08:43.197 }, 00:08:43.197 "method": "bdev_nvme_attach_controller" 00:08:43.197 }, 00:08:43.197 { 00:08:43.197 "params": { 00:08:43.197 "trtype": "pcie", 00:08:43.197 "traddr": "0000:00:07.0", 00:08:43.197 "name": "Nvme1" 00:08:43.197 }, 00:08:43.197 "method": "bdev_nvme_attach_controller" 00:08:43.197 }, 00:08:43.197 { 00:08:43.197 "method": "bdev_wait_for_examine" 00:08:43.197 } 00:08:43.197 ] 00:08:43.197 } 00:08:43.197 ] 00:08:43.197 } 00:08:43.197 [2024-07-26 10:13:56.620835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.456 [2024-07-26 10:13:56.720776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.090  Copying: 52/64 [MB] (52 MBps) Copying: 64/64 [MB] (average 51 MBps) 00:08:45.090 00:08:45.090 ************************************ 00:08:45.090 END TEST dd_copy_to_out_bdev 00:08:45.090 ************************************ 00:08:45.090 00:08:45.090 real 0m2.030s 00:08:45.090 user 0m1.751s 00:08:45.090 sys 0m0.212s 00:08:45.090 10:13:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.090 10:13:58 -- common/autotest_common.sh@10 -- # set +x 00:08:45.090 10:13:58 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:45.090 10:13:58 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:45.090 10:13:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:45.090 10:13:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:45.090 10:13:58 -- common/autotest_common.sh@10 -- # set +x 00:08:45.090 ************************************ 00:08:45.090 START TEST dd_offset_magic 00:08:45.090 ************************************ 00:08:45.090 10:13:58 -- common/autotest_common.sh@1104 -- # offset_magic 00:08:45.090 10:13:58 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:45.090 10:13:58 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:45.090 10:13:58 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:45.090 10:13:58 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:45.090 10:13:58 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:45.090 10:13:58 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:45.090 10:13:58 -- dd/common.sh@31 -- # xtrace_disable 00:08:45.091 10:13:58 -- common/autotest_common.sh@10 -- # set +x 00:08:45.349 [2024-07-26 10:13:58.570088] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:45.349 [2024-07-26 10:13:58.570215] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70541 ] 00:08:45.349 { 00:08:45.349 "subsystems": [ 00:08:45.349 { 00:08:45.349 "subsystem": "bdev", 00:08:45.349 "config": [ 00:08:45.349 { 00:08:45.349 "params": { 00:08:45.349 "trtype": "pcie", 00:08:45.349 "traddr": "0000:00:06.0", 00:08:45.349 "name": "Nvme0" 00:08:45.349 }, 00:08:45.349 "method": "bdev_nvme_attach_controller" 00:08:45.349 }, 00:08:45.349 { 00:08:45.349 "params": { 00:08:45.349 "trtype": "pcie", 00:08:45.349 "traddr": "0000:00:07.0", 00:08:45.349 "name": "Nvme1" 00:08:45.349 }, 00:08:45.349 "method": "bdev_nvme_attach_controller" 00:08:45.349 }, 00:08:45.349 { 00:08:45.349 "method": "bdev_wait_for_examine" 00:08:45.349 } 00:08:45.349 ] 00:08:45.349 } 00:08:45.349 ] 00:08:45.349 } 00:08:45.349 [2024-07-26 10:13:58.709885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.606 [2024-07-26 10:13:58.808542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.122  Copying: 65/65 [MB] (average 928 MBps) 00:08:46.122 00:08:46.122 10:13:59 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:46.122 10:13:59 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:46.122 10:13:59 -- dd/common.sh@31 -- # xtrace_disable 00:08:46.122 10:13:59 -- common/autotest_common.sh@10 -- # set +x 00:08:46.122 [2024-07-26 10:13:59.447824] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:46.122 [2024-07-26 10:13:59.447941] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70555 ] 00:08:46.122 { 00:08:46.122 "subsystems": [ 00:08:46.122 { 00:08:46.122 "subsystem": "bdev", 00:08:46.122 "config": [ 00:08:46.122 { 00:08:46.122 "params": { 00:08:46.122 "trtype": "pcie", 00:08:46.122 "traddr": "0000:00:06.0", 00:08:46.122 "name": "Nvme0" 00:08:46.122 }, 00:08:46.122 "method": "bdev_nvme_attach_controller" 00:08:46.122 }, 00:08:46.122 { 00:08:46.122 "params": { 00:08:46.122 "trtype": "pcie", 00:08:46.122 "traddr": "0000:00:07.0", 00:08:46.122 "name": "Nvme1" 00:08:46.122 }, 00:08:46.122 "method": "bdev_nvme_attach_controller" 00:08:46.122 }, 00:08:46.122 { 00:08:46.122 "method": "bdev_wait_for_examine" 00:08:46.122 } 00:08:46.122 ] 00:08:46.122 } 00:08:46.122 ] 00:08:46.122 } 00:08:46.380 [2024-07-26 10:13:59.588633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.380 [2024-07-26 10:13:59.707783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.896  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:46.897 00:08:46.897 10:14:00 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:46.897 10:14:00 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:46.897 10:14:00 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:46.897 10:14:00 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:46.897 10:14:00 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:46.897 10:14:00 -- dd/common.sh@31 -- # xtrace_disable 00:08:46.897 10:14:00 -- common/autotest_common.sh@10 -- # set +x 00:08:46.897 [2024-07-26 10:14:00.230786] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:46.897 [2024-07-26 10:14:00.230884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70575 ] 00:08:46.897 { 00:08:46.897 "subsystems": [ 00:08:46.897 { 00:08:46.897 "subsystem": "bdev", 00:08:46.897 "config": [ 00:08:46.897 { 00:08:46.897 "params": { 00:08:46.897 "trtype": "pcie", 00:08:46.897 "traddr": "0000:00:06.0", 00:08:46.897 "name": "Nvme0" 00:08:46.897 }, 00:08:46.897 "method": "bdev_nvme_attach_controller" 00:08:46.897 }, 00:08:46.897 { 00:08:46.897 "params": { 00:08:46.897 "trtype": "pcie", 00:08:46.897 "traddr": "0000:00:07.0", 00:08:46.897 "name": "Nvme1" 00:08:46.897 }, 00:08:46.897 "method": "bdev_nvme_attach_controller" 00:08:46.897 }, 00:08:46.897 { 00:08:46.897 "method": "bdev_wait_for_examine" 00:08:46.897 } 00:08:46.897 ] 00:08:46.897 } 00:08:46.897 ] 00:08:46.897 } 00:08:47.155 [2024-07-26 10:14:00.361878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.155 [2024-07-26 10:14:00.460679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.671  Copying: 65/65 [MB] (average 970 MBps) 00:08:47.671 00:08:47.671 10:14:01 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:47.671 10:14:01 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:47.671 10:14:01 -- dd/common.sh@31 -- # xtrace_disable 00:08:47.671 10:14:01 -- common/autotest_common.sh@10 -- # set +x 00:08:47.671 [2024-07-26 10:14:01.074021] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:47.671 [2024-07-26 10:14:01.074135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70590 ] 00:08:47.671 { 00:08:47.671 "subsystems": [ 00:08:47.671 { 00:08:47.671 "subsystem": "bdev", 00:08:47.671 "config": [ 00:08:47.671 { 00:08:47.671 "params": { 00:08:47.671 "trtype": "pcie", 00:08:47.671 "traddr": "0000:00:06.0", 00:08:47.671 "name": "Nvme0" 00:08:47.671 }, 00:08:47.671 "method": "bdev_nvme_attach_controller" 00:08:47.671 }, 00:08:47.671 { 00:08:47.671 "params": { 00:08:47.671 "trtype": "pcie", 00:08:47.671 "traddr": "0000:00:07.0", 00:08:47.671 "name": "Nvme1" 00:08:47.671 }, 00:08:47.671 "method": "bdev_nvme_attach_controller" 00:08:47.671 }, 00:08:47.671 { 00:08:47.671 "method": "bdev_wait_for_examine" 00:08:47.671 } 00:08:47.671 ] 00:08:47.671 } 00:08:47.671 ] 00:08:47.671 } 00:08:47.929 [2024-07-26 10:14:01.215682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.929 [2024-07-26 10:14:01.326761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.484  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:48.484 00:08:48.484 10:14:01 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:48.484 10:14:01 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:48.484 00:08:48.484 real 0m3.294s 00:08:48.484 user 0m2.387s 00:08:48.484 sys 0m0.721s 00:08:48.484 10:14:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.484 10:14:01 -- common/autotest_common.sh@10 -- # set +x 00:08:48.484 ************************************ 00:08:48.484 END TEST dd_offset_magic 00:08:48.484 ************************************ 00:08:48.484 10:14:01 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:48.484 10:14:01 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:48.484 10:14:01 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:48.484 10:14:01 -- dd/common.sh@11 -- # local nvme_ref= 00:08:48.484 10:14:01 -- dd/common.sh@12 -- # local size=4194330 00:08:48.484 10:14:01 -- dd/common.sh@14 -- # local bs=1048576 00:08:48.484 10:14:01 -- dd/common.sh@15 -- # local count=5 00:08:48.484 10:14:01 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:48.484 10:14:01 -- dd/common.sh@18 -- # gen_conf 00:08:48.484 10:14:01 -- dd/common.sh@31 -- # xtrace_disable 00:08:48.484 10:14:01 -- common/autotest_common.sh@10 -- # set +x 00:08:48.484 [2024-07-26 10:14:01.901112] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:48.484 [2024-07-26 10:14:01.901213] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70625 ] 00:08:48.484 { 00:08:48.484 "subsystems": [ 00:08:48.484 { 00:08:48.484 "subsystem": "bdev", 00:08:48.484 "config": [ 00:08:48.484 { 00:08:48.484 "params": { 00:08:48.484 "trtype": "pcie", 00:08:48.484 "traddr": "0000:00:06.0", 00:08:48.484 "name": "Nvme0" 00:08:48.484 }, 00:08:48.484 "method": "bdev_nvme_attach_controller" 00:08:48.484 }, 00:08:48.484 { 00:08:48.484 "params": { 00:08:48.484 "trtype": "pcie", 00:08:48.484 "traddr": "0000:00:07.0", 00:08:48.484 "name": "Nvme1" 00:08:48.484 }, 00:08:48.484 "method": "bdev_nvme_attach_controller" 00:08:48.484 }, 00:08:48.484 { 00:08:48.484 "method": "bdev_wait_for_examine" 00:08:48.484 } 00:08:48.484 ] 00:08:48.484 } 00:08:48.484 ] 00:08:48.484 } 00:08:48.741 [2024-07-26 10:14:02.035765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.741 [2024-07-26 10:14:02.144880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.257  Copying: 5120/5120 [kB] (average 1000 MBps) 00:08:49.257 00:08:49.257 10:14:02 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:49.257 10:14:02 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:49.257 10:14:02 -- dd/common.sh@11 -- # local nvme_ref= 00:08:49.257 10:14:02 -- dd/common.sh@12 -- # local size=4194330 00:08:49.257 10:14:02 -- dd/common.sh@14 -- # local bs=1048576 00:08:49.257 10:14:02 -- dd/common.sh@15 -- # local count=5 00:08:49.257 10:14:02 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:49.257 10:14:02 -- dd/common.sh@18 -- # gen_conf 00:08:49.257 10:14:02 -- dd/common.sh@31 -- # xtrace_disable 00:08:49.257 10:14:02 -- common/autotest_common.sh@10 -- # set +x 00:08:49.257 [2024-07-26 10:14:02.690874] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:49.257 [2024-07-26 10:14:02.690985] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70645 ] 00:08:49.257 { 00:08:49.257 "subsystems": [ 00:08:49.257 { 00:08:49.257 "subsystem": "bdev", 00:08:49.257 "config": [ 00:08:49.257 { 00:08:49.257 "params": { 00:08:49.257 "trtype": "pcie", 00:08:49.257 "traddr": "0000:00:06.0", 00:08:49.257 "name": "Nvme0" 00:08:49.257 }, 00:08:49.257 "method": "bdev_nvme_attach_controller" 00:08:49.257 }, 00:08:49.257 { 00:08:49.257 "params": { 00:08:49.257 "trtype": "pcie", 00:08:49.257 "traddr": "0000:00:07.0", 00:08:49.257 "name": "Nvme1" 00:08:49.257 }, 00:08:49.257 "method": "bdev_nvme_attach_controller" 00:08:49.257 }, 00:08:49.257 { 00:08:49.257 "method": "bdev_wait_for_examine" 00:08:49.257 } 00:08:49.257 ] 00:08:49.257 } 00:08:49.257 ] 00:08:49.257 } 00:08:49.516 [2024-07-26 10:14:02.829351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.516 [2024-07-26 10:14:02.931119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.032  Copying: 5120/5120 [kB] (average 714 MBps) 00:08:50.032 00:08:50.032 10:14:03 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:50.032 ************************************ 00:08:50.032 END TEST spdk_dd_bdev_to_bdev 00:08:50.032 ************************************ 00:08:50.032 00:08:50.032 real 0m7.849s 00:08:50.032 user 0m5.737s 00:08:50.032 sys 0m1.622s 00:08:50.032 10:14:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.032 10:14:03 -- common/autotest_common.sh@10 -- # set +x 00:08:50.291 10:14:03 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:50.291 10:14:03 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:50.291 10:14:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:50.291 10:14:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:50.291 10:14:03 -- common/autotest_common.sh@10 -- # set +x 00:08:50.291 ************************************ 00:08:50.291 START TEST spdk_dd_uring 00:08:50.291 ************************************ 00:08:50.291 10:14:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:50.291 * Looking for test storage... 
00:08:50.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:50.291 10:14:03 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:50.291 10:14:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.291 10:14:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.291 10:14:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.291 10:14:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.291 10:14:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.291 10:14:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.291 10:14:03 -- paths/export.sh@5 -- # export PATH 00:08:50.291 10:14:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.291 10:14:03 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:50.291 10:14:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:50.291 10:14:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:50.291 10:14:03 -- common/autotest_common.sh@10 -- # set +x 00:08:50.291 ************************************ 00:08:50.291 START TEST dd_uring_copy 00:08:50.291 ************************************ 00:08:50.291 10:14:03 -- common/autotest_common.sh@1104 -- # uring_zram_copy 00:08:50.291 10:14:03 -- dd/uring.sh@15 -- # local zram_dev_id 00:08:50.291 10:14:03 -- dd/uring.sh@16 -- # local magic 00:08:50.291 10:14:03 -- dd/uring.sh@17 -- # local 
magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:50.291 10:14:03 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:50.291 10:14:03 -- dd/uring.sh@19 -- # local verify_magic 00:08:50.291 10:14:03 -- dd/uring.sh@21 -- # init_zram 00:08:50.291 10:14:03 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:08:50.291 10:14:03 -- dd/common.sh@164 -- # return 00:08:50.291 10:14:03 -- dd/uring.sh@22 -- # create_zram_dev 00:08:50.291 10:14:03 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:08:50.291 10:14:03 -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:50.291 10:14:03 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:50.291 10:14:03 -- dd/common.sh@181 -- # local id=1 00:08:50.291 10:14:03 -- dd/common.sh@182 -- # local size=512M 00:08:50.291 10:14:03 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:08:50.291 10:14:03 -- dd/common.sh@186 -- # echo 512M 00:08:50.291 10:14:03 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:50.291 10:14:03 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:50.291 10:14:03 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:50.291 10:14:03 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:50.291 10:14:03 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:50.291 10:14:03 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:50.291 10:14:03 -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:50.291 10:14:03 -- dd/common.sh@98 -- # xtrace_disable 00:08:50.291 10:14:03 -- common/autotest_common.sh@10 -- # set +x 00:08:50.291 10:14:03 -- dd/uring.sh@41 -- # magic=yg4ck1ca64gq8rmv0x74fp8eu8hwxttvdtqkl2jtapgehhnz2hpldjlb9x7eltl118gjdcckkt3mvqucc3j5gkz6rallkola7dyfw66wm952nd24c5veod48dlgq7j5ejksaylnb8r02lw5qb72srwojjubuk6w9a4miyjzbc997ciz5718hh7sxdhud0qz0uvvmq9zn9rk6qe7z0kbm3y3shu1cnd3wv774z13fuzqeivwxvyy9dppu4cppxqp0nntuji45xhrnfwkwsihaf5y1x86q8qculjr5coajf5r35zqlkykh723a6wv7zjz1gb9yijw1uxtl1o9whiycjl6n423kep9b4vxotj0p3i6j5kdoiq45oole7j66h4sdx8avenu50bxfka5zhanbn4hqul0apomml1lkpr031r5vp6lxle48f3eqyguxm8myp83adksxsforl437mgvg4c73rj8ok1rvu678jy3g6930db58fxv3szu938pt8k7a3a42gehddkxvf1r1ybrex4wbigyvptg507sjdg52e8hgv2rncfdamr1mp9ko3jdvk5ok9hfhfphdp7skgvjix5yo3qp8at3m954ymevvhxktx295u648b0utpbphsrhqb9w5yerm0akxwgxnupenslemewiyf86mcs9gb6pndyux5q9tnkactn3885bb2v30ocj5s5dwpvddu3m7f0nnjlvmnqkmoaiwrh7b3tbb4ybvvswnxkizzdda09q3oq4r0hrn2qyotvo6gv7lt3xe4db6w7md31jpudfot0o0dx4js7473m3lidy2do6y8deirbb6kpfdngp37g5qeyix5k8i8ks54tcfveg8871kkoqdi8ygq1zcfy4jpseifkiypkrc6rynx7fxt9j4jwdzfdqpgu3s7jm91ap9qu4vurav4drq2n8ss20htmdulye9oo8x6vhqyng5ern8zahtgivlqkkdpwo96kdo8q6raxjzu66z739cpzot7zqo6bvf 00:08:50.291 10:14:03 -- dd/uring.sh@42 -- # echo 
yg4ck1ca64gq8rmv0x74fp8eu8hwxttvdtqkl2jtapgehhnz2hpldjlb9x7eltl118gjdcckkt3mvqucc3j5gkz6rallkola7dyfw66wm952nd24c5veod48dlgq7j5ejksaylnb8r02lw5qb72srwojjubuk6w9a4miyjzbc997ciz5718hh7sxdhud0qz0uvvmq9zn9rk6qe7z0kbm3y3shu1cnd3wv774z13fuzqeivwxvyy9dppu4cppxqp0nntuji45xhrnfwkwsihaf5y1x86q8qculjr5coajf5r35zqlkykh723a6wv7zjz1gb9yijw1uxtl1o9whiycjl6n423kep9b4vxotj0p3i6j5kdoiq45oole7j66h4sdx8avenu50bxfka5zhanbn4hqul0apomml1lkpr031r5vp6lxle48f3eqyguxm8myp83adksxsforl437mgvg4c73rj8ok1rvu678jy3g6930db58fxv3szu938pt8k7a3a42gehddkxvf1r1ybrex4wbigyvptg507sjdg52e8hgv2rncfdamr1mp9ko3jdvk5ok9hfhfphdp7skgvjix5yo3qp8at3m954ymevvhxktx295u648b0utpbphsrhqb9w5yerm0akxwgxnupenslemewiyf86mcs9gb6pndyux5q9tnkactn3885bb2v30ocj5s5dwpvddu3m7f0nnjlvmnqkmoaiwrh7b3tbb4ybvvswnxkizzdda09q3oq4r0hrn2qyotvo6gv7lt3xe4db6w7md31jpudfot0o0dx4js7473m3lidy2do6y8deirbb6kpfdngp37g5qeyix5k8i8ks54tcfveg8871kkoqdi8ygq1zcfy4jpseifkiypkrc6rynx7fxt9j4jwdzfdqpgu3s7jm91ap9qu4vurav4drq2n8ss20htmdulye9oo8x6vhqyng5ern8zahtgivlqkkdpwo96kdo8q6raxjzu66z739cpzot7zqo6bvf 00:08:50.291 10:14:03 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:50.291 [2024-07-26 10:14:03.666357] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:50.291 [2024-07-26 10:14:03.666457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70713 ] 00:08:50.550 [2024-07-26 10:14:03.799624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.550 [2024-07-26 10:14:03.909791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.685  Copying: 511/511 [MB] (average 1279 MBps) 00:08:51.685 00:08:51.685 10:14:04 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:51.685 10:14:04 -- dd/uring.sh@54 -- # gen_conf 00:08:51.685 10:14:04 -- dd/common.sh@31 -- # xtrace_disable 00:08:51.685 10:14:04 -- common/autotest_common.sh@10 -- # set +x 00:08:51.685 [2024-07-26 10:14:05.032799] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:08:51.685 [2024-07-26 10:14:05.032928] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70729 ] 00:08:51.685 { 00:08:51.685 "subsystems": [ 00:08:51.685 { 00:08:51.685 "subsystem": "bdev", 00:08:51.685 "config": [ 00:08:51.685 { 00:08:51.685 "params": { 00:08:51.685 "block_size": 512, 00:08:51.685 "num_blocks": 1048576, 00:08:51.685 "name": "malloc0" 00:08:51.685 }, 00:08:51.685 "method": "bdev_malloc_create" 00:08:51.685 }, 00:08:51.685 { 00:08:51.685 "params": { 00:08:51.685 "filename": "/dev/zram1", 00:08:51.685 "name": "uring0" 00:08:51.685 }, 00:08:51.685 "method": "bdev_uring_create" 00:08:51.685 }, 00:08:51.685 { 00:08:51.685 "method": "bdev_wait_for_examine" 00:08:51.685 } 00:08:51.685 ] 00:08:51.685 } 00:08:51.685 ] 00:08:51.685 } 00:08:51.944 [2024-07-26 10:14:05.171826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.944 [2024-07-26 10:14:05.276846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.079  Copying: 207/512 [MB] (207 MBps) Copying: 421/512 [MB] (214 MBps) Copying: 512/512 [MB] (average 210 MBps) 00:08:55.079 00:08:55.079 10:14:08 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:55.079 10:14:08 -- dd/uring.sh@60 -- # gen_conf 00:08:55.080 10:14:08 -- dd/common.sh@31 -- # xtrace_disable 00:08:55.080 10:14:08 -- common/autotest_common.sh@10 -- # set +x 00:08:55.080 [2024-07-26 10:14:08.455790] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:55.080 [2024-07-26 10:14:08.456175] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70772 ] 00:08:55.080 { 00:08:55.080 "subsystems": [ 00:08:55.080 { 00:08:55.080 "subsystem": "bdev", 00:08:55.080 "config": [ 00:08:55.080 { 00:08:55.080 "params": { 00:08:55.080 "block_size": 512, 00:08:55.080 "num_blocks": 1048576, 00:08:55.080 "name": "malloc0" 00:08:55.080 }, 00:08:55.080 "method": "bdev_malloc_create" 00:08:55.080 }, 00:08:55.080 { 00:08:55.080 "params": { 00:08:55.080 "filename": "/dev/zram1", 00:08:55.080 "name": "uring0" 00:08:55.080 }, 00:08:55.080 "method": "bdev_uring_create" 00:08:55.080 }, 00:08:55.080 { 00:08:55.080 "method": "bdev_wait_for_examine" 00:08:55.080 } 00:08:55.080 ] 00:08:55.080 } 00:08:55.080 ] 00:08:55.080 } 00:08:55.339 [2024-07-26 10:14:08.592739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.339 [2024-07-26 10:14:08.697669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.817  Copying: 148/512 [MB] (148 MBps) Copying: 289/512 [MB] (140 MBps) Copying: 429/512 [MB] (140 MBps) Copying: 512/512 [MB] (average 141 MBps) 00:08:59.817 00:08:59.817 10:14:13 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:59.818 10:14:13 -- dd/uring.sh@66 -- # [[ 
yg4ck1ca64gq8rmv0x74fp8eu8hwxttvdtqkl2jtapgehhnz2hpldjlb9x7eltl118gjdcckkt3mvqucc3j5gkz6rallkola7dyfw66wm952nd24c5veod48dlgq7j5ejksaylnb8r02lw5qb72srwojjubuk6w9a4miyjzbc997ciz5718hh7sxdhud0qz0uvvmq9zn9rk6qe7z0kbm3y3shu1cnd3wv774z13fuzqeivwxvyy9dppu4cppxqp0nntuji45xhrnfwkwsihaf5y1x86q8qculjr5coajf5r35zqlkykh723a6wv7zjz1gb9yijw1uxtl1o9whiycjl6n423kep9b4vxotj0p3i6j5kdoiq45oole7j66h4sdx8avenu50bxfka5zhanbn4hqul0apomml1lkpr031r5vp6lxle48f3eqyguxm8myp83adksxsforl437mgvg4c73rj8ok1rvu678jy3g6930db58fxv3szu938pt8k7a3a42gehddkxvf1r1ybrex4wbigyvptg507sjdg52e8hgv2rncfdamr1mp9ko3jdvk5ok9hfhfphdp7skgvjix5yo3qp8at3m954ymevvhxktx295u648b0utpbphsrhqb9w5yerm0akxwgxnupenslemewiyf86mcs9gb6pndyux5q9tnkactn3885bb2v30ocj5s5dwpvddu3m7f0nnjlvmnqkmoaiwrh7b3tbb4ybvvswnxkizzdda09q3oq4r0hrn2qyotvo6gv7lt3xe4db6w7md31jpudfot0o0dx4js7473m3lidy2do6y8deirbb6kpfdngp37g5qeyix5k8i8ks54tcfveg8871kkoqdi8ygq1zcfy4jpseifkiypkrc6rynx7fxt9j4jwdzfdqpgu3s7jm91ap9qu4vurav4drq2n8ss20htmdulye9oo8x6vhqyng5ern8zahtgivlqkkdpwo96kdo8q6raxjzu66z739cpzot7zqo6bvf == \y\g\4\c\k\1\c\a\6\4\g\q\8\r\m\v\0\x\7\4\f\p\8\e\u\8\h\w\x\t\t\v\d\t\q\k\l\2\j\t\a\p\g\e\h\h\n\z\2\h\p\l\d\j\l\b\9\x\7\e\l\t\l\1\1\8\g\j\d\c\c\k\k\t\3\m\v\q\u\c\c\3\j\5\g\k\z\6\r\a\l\l\k\o\l\a\7\d\y\f\w\6\6\w\m\9\5\2\n\d\2\4\c\5\v\e\o\d\4\8\d\l\g\q\7\j\5\e\j\k\s\a\y\l\n\b\8\r\0\2\l\w\5\q\b\7\2\s\r\w\o\j\j\u\b\u\k\6\w\9\a\4\m\i\y\j\z\b\c\9\9\7\c\i\z\5\7\1\8\h\h\7\s\x\d\h\u\d\0\q\z\0\u\v\v\m\q\9\z\n\9\r\k\6\q\e\7\z\0\k\b\m\3\y\3\s\h\u\1\c\n\d\3\w\v\7\7\4\z\1\3\f\u\z\q\e\i\v\w\x\v\y\y\9\d\p\p\u\4\c\p\p\x\q\p\0\n\n\t\u\j\i\4\5\x\h\r\n\f\w\k\w\s\i\h\a\f\5\y\1\x\8\6\q\8\q\c\u\l\j\r\5\c\o\a\j\f\5\r\3\5\z\q\l\k\y\k\h\7\2\3\a\6\w\v\7\z\j\z\1\g\b\9\y\i\j\w\1\u\x\t\l\1\o\9\w\h\i\y\c\j\l\6\n\4\2\3\k\e\p\9\b\4\v\x\o\t\j\0\p\3\i\6\j\5\k\d\o\i\q\4\5\o\o\l\e\7\j\6\6\h\4\s\d\x\8\a\v\e\n\u\5\0\b\x\f\k\a\5\z\h\a\n\b\n\4\h\q\u\l\0\a\p\o\m\m\l\1\l\k\p\r\0\3\1\r\5\v\p\6\l\x\l\e\4\8\f\3\e\q\y\g\u\x\m\8\m\y\p\8\3\a\d\k\s\x\s\f\o\r\l\4\3\7\m\g\v\g\4\c\7\3\r\j\8\o\k\1\r\v\u\6\7\8\j\y\3\g\6\9\3\0\d\b\5\8\f\x\v\3\s\z\u\9\3\8\p\t\8\k\7\a\3\a\4\2\g\e\h\d\d\k\x\v\f\1\r\1\y\b\r\e\x\4\w\b\i\g\y\v\p\t\g\5\0\7\s\j\d\g\5\2\e\8\h\g\v\2\r\n\c\f\d\a\m\r\1\m\p\9\k\o\3\j\d\v\k\5\o\k\9\h\f\h\f\p\h\d\p\7\s\k\g\v\j\i\x\5\y\o\3\q\p\8\a\t\3\m\9\5\4\y\m\e\v\v\h\x\k\t\x\2\9\5\u\6\4\8\b\0\u\t\p\b\p\h\s\r\h\q\b\9\w\5\y\e\r\m\0\a\k\x\w\g\x\n\u\p\e\n\s\l\e\m\e\w\i\y\f\8\6\m\c\s\9\g\b\6\p\n\d\y\u\x\5\q\9\t\n\k\a\c\t\n\3\8\8\5\b\b\2\v\3\0\o\c\j\5\s\5\d\w\p\v\d\d\u\3\m\7\f\0\n\n\j\l\v\m\n\q\k\m\o\a\i\w\r\h\7\b\3\t\b\b\4\y\b\v\v\s\w\n\x\k\i\z\z\d\d\a\0\9\q\3\o\q\4\r\0\h\r\n\2\q\y\o\t\v\o\6\g\v\7\l\t\3\x\e\4\d\b\6\w\7\m\d\3\1\j\p\u\d\f\o\t\0\o\0\d\x\4\j\s\7\4\7\3\m\3\l\i\d\y\2\d\o\6\y\8\d\e\i\r\b\b\6\k\p\f\d\n\g\p\3\7\g\5\q\e\y\i\x\5\k\8\i\8\k\s\5\4\t\c\f\v\e\g\8\8\7\1\k\k\o\q\d\i\8\y\g\q\1\z\c\f\y\4\j\p\s\e\i\f\k\i\y\p\k\r\c\6\r\y\n\x\7\f\x\t\9\j\4\j\w\d\z\f\d\q\p\g\u\3\s\7\j\m\9\1\a\p\9\q\u\4\v\u\r\a\v\4\d\r\q\2\n\8\s\s\2\0\h\t\m\d\u\l\y\e\9\o\o\8\x\6\v\h\q\y\n\g\5\e\r\n\8\z\a\h\t\g\i\v\l\q\k\k\d\p\w\o\9\6\k\d\o\8\q\6\r\a\x\j\z\u\6\6\z\7\3\9\c\p\z\o\t\7\z\q\o\6\b\v\f ]] 00:08:59.818 10:14:13 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:59.818 10:14:13 -- dd/uring.sh@69 -- # [[ 
yg4ck1ca64gq8rmv0x74fp8eu8hwxttvdtqkl2jtapgehhnz2hpldjlb9x7eltl118gjdcckkt3mvqucc3j5gkz6rallkola7dyfw66wm952nd24c5veod48dlgq7j5ejksaylnb8r02lw5qb72srwojjubuk6w9a4miyjzbc997ciz5718hh7sxdhud0qz0uvvmq9zn9rk6qe7z0kbm3y3shu1cnd3wv774z13fuzqeivwxvyy9dppu4cppxqp0nntuji45xhrnfwkwsihaf5y1x86q8qculjr5coajf5r35zqlkykh723a6wv7zjz1gb9yijw1uxtl1o9whiycjl6n423kep9b4vxotj0p3i6j5kdoiq45oole7j66h4sdx8avenu50bxfka5zhanbn4hqul0apomml1lkpr031r5vp6lxle48f3eqyguxm8myp83adksxsforl437mgvg4c73rj8ok1rvu678jy3g6930db58fxv3szu938pt8k7a3a42gehddkxvf1r1ybrex4wbigyvptg507sjdg52e8hgv2rncfdamr1mp9ko3jdvk5ok9hfhfphdp7skgvjix5yo3qp8at3m954ymevvhxktx295u648b0utpbphsrhqb9w5yerm0akxwgxnupenslemewiyf86mcs9gb6pndyux5q9tnkactn3885bb2v30ocj5s5dwpvddu3m7f0nnjlvmnqkmoaiwrh7b3tbb4ybvvswnxkizzdda09q3oq4r0hrn2qyotvo6gv7lt3xe4db6w7md31jpudfot0o0dx4js7473m3lidy2do6y8deirbb6kpfdngp37g5qeyix5k8i8ks54tcfveg8871kkoqdi8ygq1zcfy4jpseifkiypkrc6rynx7fxt9j4jwdzfdqpgu3s7jm91ap9qu4vurav4drq2n8ss20htmdulye9oo8x6vhqyng5ern8zahtgivlqkkdpwo96kdo8q6raxjzu66z739cpzot7zqo6bvf == \y\g\4\c\k\1\c\a\6\4\g\q\8\r\m\v\0\x\7\4\f\p\8\e\u\8\h\w\x\t\t\v\d\t\q\k\l\2\j\t\a\p\g\e\h\h\n\z\2\h\p\l\d\j\l\b\9\x\7\e\l\t\l\1\1\8\g\j\d\c\c\k\k\t\3\m\v\q\u\c\c\3\j\5\g\k\z\6\r\a\l\l\k\o\l\a\7\d\y\f\w\6\6\w\m\9\5\2\n\d\2\4\c\5\v\e\o\d\4\8\d\l\g\q\7\j\5\e\j\k\s\a\y\l\n\b\8\r\0\2\l\w\5\q\b\7\2\s\r\w\o\j\j\u\b\u\k\6\w\9\a\4\m\i\y\j\z\b\c\9\9\7\c\i\z\5\7\1\8\h\h\7\s\x\d\h\u\d\0\q\z\0\u\v\v\m\q\9\z\n\9\r\k\6\q\e\7\z\0\k\b\m\3\y\3\s\h\u\1\c\n\d\3\w\v\7\7\4\z\1\3\f\u\z\q\e\i\v\w\x\v\y\y\9\d\p\p\u\4\c\p\p\x\q\p\0\n\n\t\u\j\i\4\5\x\h\r\n\f\w\k\w\s\i\h\a\f\5\y\1\x\8\6\q\8\q\c\u\l\j\r\5\c\o\a\j\f\5\r\3\5\z\q\l\k\y\k\h\7\2\3\a\6\w\v\7\z\j\z\1\g\b\9\y\i\j\w\1\u\x\t\l\1\o\9\w\h\i\y\c\j\l\6\n\4\2\3\k\e\p\9\b\4\v\x\o\t\j\0\p\3\i\6\j\5\k\d\o\i\q\4\5\o\o\l\e\7\j\6\6\h\4\s\d\x\8\a\v\e\n\u\5\0\b\x\f\k\a\5\z\h\a\n\b\n\4\h\q\u\l\0\a\p\o\m\m\l\1\l\k\p\r\0\3\1\r\5\v\p\6\l\x\l\e\4\8\f\3\e\q\y\g\u\x\m\8\m\y\p\8\3\a\d\k\s\x\s\f\o\r\l\4\3\7\m\g\v\g\4\c\7\3\r\j\8\o\k\1\r\v\u\6\7\8\j\y\3\g\6\9\3\0\d\b\5\8\f\x\v\3\s\z\u\9\3\8\p\t\8\k\7\a\3\a\4\2\g\e\h\d\d\k\x\v\f\1\r\1\y\b\r\e\x\4\w\b\i\g\y\v\p\t\g\5\0\7\s\j\d\g\5\2\e\8\h\g\v\2\r\n\c\f\d\a\m\r\1\m\p\9\k\o\3\j\d\v\k\5\o\k\9\h\f\h\f\p\h\d\p\7\s\k\g\v\j\i\x\5\y\o\3\q\p\8\a\t\3\m\9\5\4\y\m\e\v\v\h\x\k\t\x\2\9\5\u\6\4\8\b\0\u\t\p\b\p\h\s\r\h\q\b\9\w\5\y\e\r\m\0\a\k\x\w\g\x\n\u\p\e\n\s\l\e\m\e\w\i\y\f\8\6\m\c\s\9\g\b\6\p\n\d\y\u\x\5\q\9\t\n\k\a\c\t\n\3\8\8\5\b\b\2\v\3\0\o\c\j\5\s\5\d\w\p\v\d\d\u\3\m\7\f\0\n\n\j\l\v\m\n\q\k\m\o\a\i\w\r\h\7\b\3\t\b\b\4\y\b\v\v\s\w\n\x\k\i\z\z\d\d\a\0\9\q\3\o\q\4\r\0\h\r\n\2\q\y\o\t\v\o\6\g\v\7\l\t\3\x\e\4\d\b\6\w\7\m\d\3\1\j\p\u\d\f\o\t\0\o\0\d\x\4\j\s\7\4\7\3\m\3\l\i\d\y\2\d\o\6\y\8\d\e\i\r\b\b\6\k\p\f\d\n\g\p\3\7\g\5\q\e\y\i\x\5\k\8\i\8\k\s\5\4\t\c\f\v\e\g\8\8\7\1\k\k\o\q\d\i\8\y\g\q\1\z\c\f\y\4\j\p\s\e\i\f\k\i\y\p\k\r\c\6\r\y\n\x\7\f\x\t\9\j\4\j\w\d\z\f\d\q\p\g\u\3\s\7\j\m\9\1\a\p\9\q\u\4\v\u\r\a\v\4\d\r\q\2\n\8\s\s\2\0\h\t\m\d\u\l\y\e\9\o\o\8\x\6\v\h\q\y\n\g\5\e\r\n\8\z\a\h\t\g\i\v\l\q\k\k\d\p\w\o\9\6\k\d\o\8\q\6\r\a\x\j\z\u\6\6\z\7\3\9\c\p\z\o\t\7\z\q\o\6\b\v\f ]] 00:08:59.818 10:14:13 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:00.076 10:14:13 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:09:00.076 10:14:13 -- dd/uring.sh@75 -- # gen_conf 00:09:00.076 10:14:13 -- dd/common.sh@31 -- # xtrace_disable 00:09:00.076 10:14:13 -- common/autotest_common.sh@10 -- # set +x 
00:09:00.076 [2024-07-26 10:14:13.496597] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:09:00.076 [2024-07-26 10:14:13.496712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70848 ] 00:09:00.076 { 00:09:00.076 "subsystems": [ 00:09:00.076 { 00:09:00.076 "subsystem": "bdev", 00:09:00.076 "config": [ 00:09:00.076 { 00:09:00.076 "params": { 00:09:00.076 "block_size": 512, 00:09:00.076 "num_blocks": 1048576, 00:09:00.076 "name": "malloc0" 00:09:00.076 }, 00:09:00.076 "method": "bdev_malloc_create" 00:09:00.076 }, 00:09:00.076 { 00:09:00.076 "params": { 00:09:00.076 "filename": "/dev/zram1", 00:09:00.076 "name": "uring0" 00:09:00.076 }, 00:09:00.076 "method": "bdev_uring_create" 00:09:00.076 }, 00:09:00.076 { 00:09:00.076 "method": "bdev_wait_for_examine" 00:09:00.076 } 00:09:00.076 ] 00:09:00.076 } 00:09:00.076 ] 00:09:00.076 } 00:09:00.334 [2024-07-26 10:14:13.635623] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.334 [2024-07-26 10:14:13.735512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.405  Copying: 152/512 [MB] (152 MBps) Copying: 308/512 [MB] (155 MBps) Copying: 463/512 [MB] (154 MBps) Copying: 512/512 [MB] (average 154 MBps) 00:09:04.405 00:09:04.405 10:14:17 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:09:04.405 10:14:17 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:09:04.405 10:14:17 -- dd/uring.sh@87 -- # : 00:09:04.405 10:14:17 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:09:04.405 10:14:17 -- dd/uring.sh@87 -- # : 00:09:04.405 10:14:17 -- dd/uring.sh@87 -- # gen_conf 00:09:04.405 10:14:17 -- dd/common.sh@31 -- # xtrace_disable 00:09:04.405 10:14:17 -- common/autotest_common.sh@10 -- # set +x 00:09:04.405 [2024-07-26 10:14:17.779225] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:09:04.405 [2024-07-26 10:14:17.779334] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70904 ] 00:09:04.405 { 00:09:04.405 "subsystems": [ 00:09:04.405 { 00:09:04.405 "subsystem": "bdev", 00:09:04.405 "config": [ 00:09:04.405 { 00:09:04.405 "params": { 00:09:04.405 "block_size": 512, 00:09:04.405 "num_blocks": 1048576, 00:09:04.405 "name": "malloc0" 00:09:04.405 }, 00:09:04.405 "method": "bdev_malloc_create" 00:09:04.405 }, 00:09:04.405 { 00:09:04.405 "params": { 00:09:04.405 "filename": "/dev/zram1", 00:09:04.405 "name": "uring0" 00:09:04.405 }, 00:09:04.405 "method": "bdev_uring_create" 00:09:04.405 }, 00:09:04.405 { 00:09:04.405 "params": { 00:09:04.405 "name": "uring0" 00:09:04.405 }, 00:09:04.405 "method": "bdev_uring_delete" 00:09:04.405 }, 00:09:04.405 { 00:09:04.405 "method": "bdev_wait_for_examine" 00:09:04.405 } 00:09:04.405 ] 00:09:04.405 } 00:09:04.405 ] 00:09:04.405 } 00:09:04.664 [2024-07-26 10:14:18.030040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.922 [2024-07-26 10:14:18.130604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.439  Copying: 0/0 [B] (average 0 Bps) 00:09:05.439 00:09:05.439 10:14:18 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:05.439 10:14:18 -- common/autotest_common.sh@640 -- # local es=0 00:09:05.439 10:14:18 -- dd/uring.sh@94 -- # : 00:09:05.439 10:14:18 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:05.439 10:14:18 -- dd/uring.sh@94 -- # gen_conf 00:09:05.439 10:14:18 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.439 10:14:18 -- dd/common.sh@31 -- # xtrace_disable 00:09:05.439 10:14:18 -- common/autotest_common.sh@10 -- # set +x 00:09:05.439 10:14:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:05.439 10:14:18 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.439 10:14:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:05.439 10:14:18 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.439 10:14:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:05.439 10:14:18 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.439 10:14:18 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:05.439 10:14:18 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:05.439 [2024-07-26 10:14:18.870976] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:09:05.439 [2024-07-26 10:14:18.871105] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70937 ] 00:09:05.439 { 00:09:05.439 "subsystems": [ 00:09:05.439 { 00:09:05.439 "subsystem": "bdev", 00:09:05.439 "config": [ 00:09:05.439 { 00:09:05.439 "params": { 00:09:05.439 "block_size": 512, 00:09:05.439 "num_blocks": 1048576, 00:09:05.439 "name": "malloc0" 00:09:05.439 }, 00:09:05.439 "method": "bdev_malloc_create" 00:09:05.439 }, 00:09:05.439 { 00:09:05.439 "params": { 00:09:05.439 "filename": "/dev/zram1", 00:09:05.439 "name": "uring0" 00:09:05.439 }, 00:09:05.439 "method": "bdev_uring_create" 00:09:05.439 }, 00:09:05.439 { 00:09:05.439 "params": { 00:09:05.439 "name": "uring0" 00:09:05.439 }, 00:09:05.439 "method": "bdev_uring_delete" 00:09:05.439 }, 00:09:05.439 { 00:09:05.439 "method": "bdev_wait_for_examine" 00:09:05.439 } 00:09:05.439 ] 00:09:05.439 } 00:09:05.439 ] 00:09:05.439 } 00:09:05.698 [2024-07-26 10:14:19.003843] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.698 [2024-07-26 10:14:19.108168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.956 [2024-07-26 10:14:19.380112] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:09:05.956 [2024-07-26 10:14:19.380188] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:09:05.956 [2024-07-26 10:14:19.380202] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:09:05.956 [2024-07-26 10:14:19.380214] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:06.275 [2024-07-26 10:14:19.703178] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:06.533 10:14:19 -- common/autotest_common.sh@643 -- # es=237 00:09:06.533 10:14:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:06.533 10:14:19 -- common/autotest_common.sh@652 -- # es=109 00:09:06.533 10:14:19 -- common/autotest_common.sh@653 -- # case "$es" in 00:09:06.533 10:14:19 -- common/autotest_common.sh@660 -- # es=1 00:09:06.533 10:14:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:06.533 10:14:19 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:09:06.533 10:14:19 -- dd/common.sh@172 -- # local id=1 00:09:06.533 10:14:19 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:09:06.533 10:14:19 -- dd/common.sh@176 -- # echo 1 00:09:06.533 10:14:19 -- dd/common.sh@177 -- # echo 1 00:09:06.533 10:14:19 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:06.791 00:09:06.791 real 0m16.474s 00:09:06.791 user 0m9.459s 00:09:06.791 sys 0m6.214s 00:09:06.791 10:14:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.791 10:14:20 -- common/autotest_common.sh@10 -- # set +x 00:09:06.791 ************************************ 00:09:06.791 END TEST dd_uring_copy 00:09:06.791 ************************************ 00:09:06.791 00:09:06.791 real 0m16.609s 00:09:06.791 user 0m9.507s 00:09:06.791 sys 0m6.300s 00:09:06.791 10:14:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.791 10:14:20 -- common/autotest_common.sh@10 -- # set +x 00:09:06.791 ************************************ 00:09:06.791 END TEST spdk_dd_uring 00:09:06.791 ************************************ 00:09:06.791 10:14:20 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:06.791 10:14:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:06.791 10:14:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:06.791 10:14:20 -- common/autotest_common.sh@10 -- # set +x 00:09:06.791 ************************************ 00:09:06.791 START TEST spdk_dd_sparse 00:09:06.791 ************************************ 00:09:06.791 10:14:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:06.791 * Looking for test storage... 00:09:06.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:06.791 10:14:20 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:06.791 10:14:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.791 10:14:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.791 10:14:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.791 10:14:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.791 10:14:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.791 10:14:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.791 10:14:20 -- paths/export.sh@5 -- # export PATH 00:09:06.791 10:14:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.791 10:14:20 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:09:06.791 10:14:20 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:09:06.791 10:14:20 -- dd/sparse.sh@110 -- # 
file1=file_zero1 00:09:06.791 10:14:20 -- dd/sparse.sh@111 -- # file2=file_zero2 00:09:06.791 10:14:20 -- dd/sparse.sh@112 -- # file3=file_zero3 00:09:07.050 10:14:20 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:09:07.050 10:14:20 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:09:07.050 10:14:20 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:09:07.050 10:14:20 -- dd/sparse.sh@118 -- # prepare 00:09:07.050 10:14:20 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:09:07.050 10:14:20 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:09:07.050 1+0 records in 00:09:07.050 1+0 records out 00:09:07.050 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00498783 s, 841 MB/s 00:09:07.050 10:14:20 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:09:07.050 1+0 records in 00:09:07.050 1+0 records out 00:09:07.050 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00643291 s, 652 MB/s 00:09:07.050 10:14:20 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:09:07.050 1+0 records in 00:09:07.050 1+0 records out 00:09:07.050 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00699894 s, 599 MB/s 00:09:07.050 10:14:20 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:09:07.050 10:14:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:07.050 10:14:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:07.050 10:14:20 -- common/autotest_common.sh@10 -- # set +x 00:09:07.050 ************************************ 00:09:07.050 START TEST dd_sparse_file_to_file 00:09:07.050 ************************************ 00:09:07.050 10:14:20 -- common/autotest_common.sh@1104 -- # file_to_file 00:09:07.050 10:14:20 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:09:07.050 10:14:20 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:09:07.050 10:14:20 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:07.050 10:14:20 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:09:07.050 10:14:20 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:09:07.050 10:14:20 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:09:07.050 10:14:20 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:09:07.050 10:14:20 -- dd/sparse.sh@41 -- # gen_conf 00:09:07.050 10:14:20 -- dd/common.sh@31 -- # xtrace_disable 00:09:07.050 10:14:20 -- common/autotest_common.sh@10 -- # set +x 00:09:07.050 [2024-07-26 10:14:20.334804] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:09:07.050 [2024-07-26 10:14:20.334903] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71032 ] 00:09:07.050 { 00:09:07.050 "subsystems": [ 00:09:07.050 { 00:09:07.050 "subsystem": "bdev", 00:09:07.050 "config": [ 00:09:07.050 { 00:09:07.050 "params": { 00:09:07.050 "block_size": 4096, 00:09:07.050 "filename": "dd_sparse_aio_disk", 00:09:07.050 "name": "dd_aio" 00:09:07.050 }, 00:09:07.050 "method": "bdev_aio_create" 00:09:07.050 }, 00:09:07.050 { 00:09:07.050 "params": { 00:09:07.050 "lvs_name": "dd_lvstore", 00:09:07.050 "bdev_name": "dd_aio" 00:09:07.050 }, 00:09:07.050 "method": "bdev_lvol_create_lvstore" 00:09:07.050 }, 00:09:07.050 { 00:09:07.050 "method": "bdev_wait_for_examine" 00:09:07.050 } 00:09:07.050 ] 00:09:07.050 } 00:09:07.050 ] 00:09:07.050 } 00:09:07.050 [2024-07-26 10:14:20.472158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.309 [2024-07-26 10:14:20.572425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.567  Copying: 12/36 [MB] (average 1200 MBps) 00:09:07.567 00:09:07.567 10:14:20 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:09:07.567 10:14:20 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:09:07.567 10:14:20 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:09:07.567 10:14:20 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:09:07.567 10:14:20 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:07.567 10:14:21 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:09:07.567 10:14:21 -- dd/sparse.sh@52 -- # stat1_b=24576 00:09:07.567 10:14:21 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:09:07.567 10:14:21 -- dd/sparse.sh@53 -- # stat2_b=24576 00:09:07.567 10:14:21 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:07.567 00:09:07.567 real 0m0.726s 00:09:07.567 user 0m0.434s 00:09:07.567 sys 0m0.197s 00:09:07.567 10:14:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.567 10:14:21 -- common/autotest_common.sh@10 -- # set +x 00:09:07.567 ************************************ 00:09:07.567 END TEST dd_sparse_file_to_file 00:09:07.567 ************************************ 00:09:07.826 10:14:21 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:09:07.826 10:14:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:07.826 10:14:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:07.826 10:14:21 -- common/autotest_common.sh@10 -- # set +x 00:09:07.826 ************************************ 00:09:07.826 START TEST dd_sparse_file_to_bdev 00:09:07.826 ************************************ 00:09:07.826 10:14:21 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:09:07.826 10:14:21 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:07.826 10:14:21 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:09:07.826 10:14:21 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:09:07.826 10:14:21 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:09:07.826 10:14:21 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:09:07.826 10:14:21 -- dd/sparse.sh@73 -- # gen_conf 
00:09:07.826 10:14:21 -- dd/common.sh@31 -- # xtrace_disable 00:09:07.826 10:14:21 -- common/autotest_common.sh@10 -- # set +x 00:09:07.826 [2024-07-26 10:14:21.112634] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:09:07.826 [2024-07-26 10:14:21.112738] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71074 ] 00:09:07.826 { 00:09:07.826 "subsystems": [ 00:09:07.826 { 00:09:07.826 "subsystem": "bdev", 00:09:07.826 "config": [ 00:09:07.826 { 00:09:07.826 "params": { 00:09:07.826 "block_size": 4096, 00:09:07.826 "filename": "dd_sparse_aio_disk", 00:09:07.826 "name": "dd_aio" 00:09:07.826 }, 00:09:07.826 "method": "bdev_aio_create" 00:09:07.826 }, 00:09:07.826 { 00:09:07.826 "params": { 00:09:07.826 "lvs_name": "dd_lvstore", 00:09:07.826 "lvol_name": "dd_lvol", 00:09:07.826 "size": 37748736, 00:09:07.826 "thin_provision": true 00:09:07.826 }, 00:09:07.826 "method": "bdev_lvol_create" 00:09:07.826 }, 00:09:07.826 { 00:09:07.826 "method": "bdev_wait_for_examine" 00:09:07.826 } 00:09:07.827 ] 00:09:07.827 } 00:09:07.827 ] 00:09:07.827 } 00:09:07.827 [2024-07-26 10:14:21.251213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.085 [2024-07-26 10:14:21.369310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.085 [2024-07-26 10:14:21.479302] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:09:08.085  Copying: 12/36 [MB] (average 480 MBps)[2024-07-26 10:14:21.523207] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:09:08.652 00:09:08.652 00:09:08.652 00:09:08.652 real 0m0.735s 00:09:08.652 user 0m0.487s 00:09:08.652 sys 0m0.172s 00:09:08.652 10:14:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.652 10:14:21 -- common/autotest_common.sh@10 -- # set +x 00:09:08.652 ************************************ 00:09:08.652 END TEST dd_sparse_file_to_bdev 00:09:08.652 ************************************ 00:09:08.652 10:14:21 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:09:08.652 10:14:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:08.652 10:14:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:08.652 10:14:21 -- common/autotest_common.sh@10 -- # set +x 00:09:08.652 ************************************ 00:09:08.652 START TEST dd_sparse_bdev_to_file 00:09:08.652 ************************************ 00:09:08.652 10:14:21 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:09:08.652 10:14:21 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:09:08.652 10:14:21 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:09:08.652 10:14:21 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:08.652 10:14:21 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:09:08.652 10:14:21 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:09:08.652 10:14:21 -- dd/sparse.sh@91 -- # gen_conf 00:09:08.652 10:14:21 -- dd/common.sh@31 -- # xtrace_disable 00:09:08.652 10:14:21 -- 
common/autotest_common.sh@10 -- # set +x 00:09:08.652 [2024-07-26 10:14:21.911250] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:09:08.652 [2024-07-26 10:14:21.911340] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71105 ] 00:09:08.652 { 00:09:08.652 "subsystems": [ 00:09:08.652 { 00:09:08.652 "subsystem": "bdev", 00:09:08.652 "config": [ 00:09:08.652 { 00:09:08.652 "params": { 00:09:08.652 "block_size": 4096, 00:09:08.652 "filename": "dd_sparse_aio_disk", 00:09:08.652 "name": "dd_aio" 00:09:08.652 }, 00:09:08.652 "method": "bdev_aio_create" 00:09:08.652 }, 00:09:08.652 { 00:09:08.652 "method": "bdev_wait_for_examine" 00:09:08.652 } 00:09:08.652 ] 00:09:08.652 } 00:09:08.652 ] 00:09:08.652 } 00:09:08.652 [2024-07-26 10:14:22.047900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.910 [2024-07-26 10:14:22.153413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.168  Copying: 12/36 [MB] (average 1000 MBps) 00:09:09.168 00:09:09.168 10:14:22 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:09:09.168 10:14:22 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:09:09.168 10:14:22 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:09:09.168 10:14:22 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:09:09.168 10:14:22 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:09.168 10:14:22 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:09:09.168 10:14:22 -- dd/sparse.sh@102 -- # stat2_b=24576 00:09:09.168 10:14:22 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:09:09.168 10:14:22 -- dd/sparse.sh@103 -- # stat3_b=24576 00:09:09.168 10:14:22 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:09.168 00:09:09.168 real 0m0.721s 00:09:09.168 user 0m0.443s 00:09:09.168 sys 0m0.197s 00:09:09.168 10:14:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.168 ************************************ 00:09:09.168 END TEST dd_sparse_bdev_to_file 00:09:09.168 ************************************ 00:09:09.169 10:14:22 -- common/autotest_common.sh@10 -- # set +x 00:09:09.169 10:14:22 -- dd/sparse.sh@1 -- # cleanup 00:09:09.169 10:14:22 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:09:09.427 10:14:22 -- dd/sparse.sh@12 -- # rm file_zero1 00:09:09.427 10:14:22 -- dd/sparse.sh@13 -- # rm file_zero2 00:09:09.427 10:14:22 -- dd/sparse.sh@14 -- # rm file_zero3 00:09:09.427 00:09:09.427 real 0m2.482s 00:09:09.427 user 0m1.447s 00:09:09.427 sys 0m0.771s 00:09:09.427 10:14:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.427 ************************************ 00:09:09.427 10:14:22 -- common/autotest_common.sh@10 -- # set +x 00:09:09.428 END TEST spdk_dd_sparse 00:09:09.428 ************************************ 00:09:09.428 10:14:22 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:09.428 10:14:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:09.428 10:14:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:09.428 10:14:22 -- common/autotest_common.sh@10 -- # set +x 00:09:09.428 ************************************ 00:09:09.428 START TEST spdk_dd_negative 00:09:09.428 ************************************ 00:09:09.428 10:14:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 
00:09:09.428 * Looking for test storage... 00:09:09.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:09.428 10:14:22 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:09.428 10:14:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.428 10:14:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.428 10:14:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.428 10:14:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.428 10:14:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.428 10:14:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.428 10:14:22 -- paths/export.sh@5 -- # export PATH 00:09:09.428 10:14:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.428 10:14:22 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:09.428 10:14:22 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:09.428 10:14:22 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:09.428 10:14:22 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:09.428 10:14:22 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:09:09.428 10:14:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:09.428 10:14:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:09.428 10:14:22 -- 
common/autotest_common.sh@10 -- # set +x 00:09:09.428 ************************************ 00:09:09.428 START TEST dd_invalid_arguments 00:09:09.428 ************************************ 00:09:09.428 10:14:22 -- common/autotest_common.sh@1104 -- # invalid_arguments 00:09:09.428 10:14:22 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:09.428 10:14:22 -- common/autotest_common.sh@640 -- # local es=0 00:09:09.428 10:14:22 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:09.428 10:14:22 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.428 10:14:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:09.428 10:14:22 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.428 10:14:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:09.428 10:14:22 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.428 10:14:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:09.428 10:14:22 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.428 10:14:22 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:09.428 10:14:22 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:09.428 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:09:09.428 options: 00:09:09.428 -c, --config JSON config file (default none) 00:09:09.428 --json JSON config file (default none) 00:09:09.428 --json-ignore-init-errors 00:09:09.428 don't exit on invalid config entry 00:09:09.428 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:09.428 -g, --single-file-segments 00:09:09.428 force creating just one hugetlbfs file 00:09:09.428 -h, --help show this usage 00:09:09.428 -i, --shm-id shared memory ID (optional) 00:09:09.428 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:09:09.428 --lcores lcore to CPU mapping list. The list is in the format: 00:09:09.428 [<,lcores[@CPUs]>...] 00:09:09.428 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:09.428 Within the group, '-' is used for range separator, 00:09:09.428 ',' is used for single number separator. 00:09:09.428 '( )' can be omitted for single element group, 00:09:09.428 '@' can be omitted if cpus and lcores have the same value 00:09:09.428 -n, --mem-channels channel number of memory channels used for DPDK 00:09:09.428 -p, --main-core main (primary) core for DPDK 00:09:09.428 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:09.428 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:09.428 --disable-cpumask-locks Disable CPU core lock files. 
00:09:09.428 --silence-noticelog disable notice level logging to stderr 00:09:09.428 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:09.428 -u, --no-pci disable PCI access 00:09:09.428 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:09.428 --max-delay maximum reactor delay (in microseconds) 00:09:09.428 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:09.428 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:09.428 -R, --huge-unlink unlink huge files after initialization 00:09:09.428 -v, --version print SPDK version 00:09:09.428 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:09.428 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:09.428 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:09.428 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:09:09.429 Tracepoints vary in size and can use more than one trace entry. 00:09:09.429 --rpcs-allowed comma-separated list of permitted RPCS 00:09:09.429 --env-context Opaque context for use of the env implementation 00:09:09.429 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:09.429 --no-huge run without using hugepages 00:09:09.429 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:09:09.429 -e, --tpoint-group [:] 00:09:09.429 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:09:09.429 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:09:09.429 Groups and masks /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:09:09.429 [2024-07-26 10:14:22.850087] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:09:09.429 can be combined (e.g. thread,bdev:0x1). 00:09:09.429 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:09:09.429 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:09:09.429 [--------- DD Options ---------] 00:09:09.429 --if Input file. Must specify either --if or --ib. 00:09:09.429 --ib Input bdev. Must specifier either --if or --ib 00:09:09.429 --of Output file. Must specify either --of or --ob. 00:09:09.429 --ob Output bdev. Must specify either --of or --ob. 00:09:09.429 --iflag Input file flags. 00:09:09.429 --oflag Output file flags. 00:09:09.429 --bs I/O unit size (default: 4096) 00:09:09.429 --qd Queue depth (default: 2) 00:09:09.429 --count I/O unit count. The number of I/O units to copy. 
(default: all) 00:09:09.429 --skip Skip this many I/O units at start of input. (default: 0) 00:09:09.429 --seek Skip this many I/O units at start of output. (default: 0) 00:09:09.429 --aio Force usage of AIO. (by default io_uring is used if available) 00:09:09.429 --sparse Enable hole skipping in input target 00:09:09.429 Available iflag and oflag values: 00:09:09.429 append - append mode 00:09:09.429 direct - use direct I/O for data 00:09:09.429 directory - fail unless a directory 00:09:09.429 dsync - use synchronized I/O for data 00:09:09.429 noatime - do not update access time 00:09:09.429 noctty - do not assign controlling terminal from file 00:09:09.429 nofollow - do not follow symlinks 00:09:09.429 nonblock - use non-blocking I/O 00:09:09.429 sync - use synchronized I/O for data and metadata 00:09:09.429 10:14:22 -- common/autotest_common.sh@643 -- # es=2 00:09:09.429 10:14:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:09.429 10:14:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:09.429 10:14:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:09.429 00:09:09.429 real 0m0.071s 00:09:09.429 user 0m0.042s 00:09:09.429 sys 0m0.027s 00:09:09.429 10:14:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.429 10:14:22 -- common/autotest_common.sh@10 -- # set +x 00:09:09.429 ************************************ 00:09:09.429 END TEST dd_invalid_arguments 00:09:09.429 ************************************ 00:09:09.688 10:14:22 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:09:09.688 10:14:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:09.688 10:14:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:09.688 10:14:22 -- common/autotest_common.sh@10 -- # set +x 00:09:09.688 ************************************ 00:09:09.688 START TEST dd_double_input 00:09:09.688 ************************************ 00:09:09.688 10:14:22 -- common/autotest_common.sh@1104 -- # double_input 00:09:09.688 10:14:22 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:09.688 10:14:22 -- common/autotest_common.sh@640 -- # local es=0 00:09:09.688 10:14:22 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:09.688 10:14:22 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.688 10:14:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:09.688 10:14:22 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.688 10:14:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:09.688 10:14:22 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.688 10:14:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:09.688 10:14:22 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.688 10:14:22 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:09.688 10:14:22 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:09.688 [2024-07-26 10:14:22.973921] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:09:09.688 10:14:22 -- common/autotest_common.sh@643 -- # es=22 00:09:09.688 10:14:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:09.688 10:14:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:09.688 10:14:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:09.688 00:09:09.688 real 0m0.072s 00:09:09.688 user 0m0.048s 00:09:09.688 sys 0m0.022s 00:09:09.688 10:14:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.688 10:14:22 -- common/autotest_common.sh@10 -- # set +x 00:09:09.688 ************************************ 00:09:09.688 END TEST dd_double_input 00:09:09.688 ************************************ 00:09:09.688 10:14:23 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:09:09.688 10:14:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:09.688 10:14:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:09.688 10:14:23 -- common/autotest_common.sh@10 -- # set +x 00:09:09.688 ************************************ 00:09:09.688 START TEST dd_double_output 00:09:09.688 ************************************ 00:09:09.688 10:14:23 -- common/autotest_common.sh@1104 -- # double_output 00:09:09.689 10:14:23 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:09.689 10:14:23 -- common/autotest_common.sh@640 -- # local es=0 00:09:09.689 10:14:23 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:09.689 10:14:23 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.689 10:14:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:09.689 10:14:23 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.689 10:14:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:09.689 10:14:23 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.689 10:14:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:09.689 10:14:23 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.689 10:14:23 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:09.689 10:14:23 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:09.689 [2024-07-26 10:14:23.095413] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:09:09.689 10:14:23 -- common/autotest_common.sh@643 -- # es=22 00:09:09.689 10:14:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:09.689 10:14:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:09.689 10:14:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:09.689 00:09:09.689 real 0m0.072s 00:09:09.689 user 0m0.039s 00:09:09.689 sys 0m0.032s 00:09:09.689 10:14:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.689 10:14:23 -- common/autotest_common.sh@10 -- # set +x 00:09:09.689 ************************************ 00:09:09.689 END TEST dd_double_output 00:09:09.689 ************************************ 00:09:09.946 10:14:23 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:09:09.946 10:14:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:09.946 10:14:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:09.946 10:14:23 -- common/autotest_common.sh@10 -- # set +x 00:09:09.946 ************************************ 00:09:09.946 START TEST dd_no_input 00:09:09.946 ************************************ 00:09:09.946 10:14:23 -- common/autotest_common.sh@1104 -- # no_input 00:09:09.946 10:14:23 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:09.946 10:14:23 -- common/autotest_common.sh@640 -- # local es=0 00:09:09.946 10:14:23 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:09.947 10:14:23 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.947 10:14:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:09.947 10:14:23 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.947 10:14:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:09.947 10:14:23 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.947 10:14:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:09.947 10:14:23 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.947 10:14:23 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:09.947 10:14:23 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:09.947 [2024-07-26 10:14:23.216725] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:09:09.947 10:14:23 -- common/autotest_common.sh@643 -- # es=22 00:09:09.947 10:14:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:09.947 10:14:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:09.947 10:14:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:09.947 00:09:09.947 real 0m0.070s 00:09:09.947 user 0m0.044s 00:09:09.947 sys 0m0.024s 00:09:09.947 10:14:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.947 10:14:23 -- common/autotest_common.sh@10 -- # set +x 00:09:09.947 ************************************ 00:09:09.947 END TEST dd_no_input 00:09:09.947 ************************************ 00:09:09.947 10:14:23 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:09:09.947 10:14:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:09.947 10:14:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:09.947 10:14:23 -- common/autotest_common.sh@10 -- # set +x 00:09:09.947 ************************************ 
00:09:09.947 START TEST dd_no_output 00:09:09.947 ************************************ 00:09:09.947 10:14:23 -- common/autotest_common.sh@1104 -- # no_output 00:09:09.947 10:14:23 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:09.947 10:14:23 -- common/autotest_common.sh@640 -- # local es=0 00:09:09.947 10:14:23 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:09.947 10:14:23 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.947 10:14:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:09.947 10:14:23 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.947 10:14:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:09.947 10:14:23 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.947 10:14:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:09.947 10:14:23 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.947 10:14:23 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:09.947 10:14:23 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:09.947 [2024-07-26 10:14:23.337452] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:09:09.947 10:14:23 -- common/autotest_common.sh@643 -- # es=22 00:09:09.947 10:14:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:09.947 10:14:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:09.947 10:14:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:09.947 00:09:09.947 real 0m0.069s 00:09:09.947 user 0m0.038s 00:09:09.947 sys 0m0.030s 00:09:09.947 10:14:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.947 ************************************ 00:09:09.947 END TEST dd_no_output 00:09:09.947 10:14:23 -- common/autotest_common.sh@10 -- # set +x 00:09:09.947 ************************************ 00:09:09.947 10:14:23 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:09:09.947 10:14:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:09.947 10:14:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:09.947 10:14:23 -- common/autotest_common.sh@10 -- # set +x 00:09:10.205 ************************************ 00:09:10.205 START TEST dd_wrong_blocksize 00:09:10.205 ************************************ 00:09:10.205 10:14:23 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:09:10.205 10:14:23 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:10.205 10:14:23 -- common/autotest_common.sh@640 -- # local es=0 00:09:10.205 10:14:23 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:10.205 10:14:23 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.205 10:14:23 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:09:10.206 10:14:23 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.206 10:14:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:10.206 10:14:23 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.206 10:14:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:10.206 10:14:23 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.206 10:14:23 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:10.206 10:14:23 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:10.206 [2024-07-26 10:14:23.464265] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:09:10.206 10:14:23 -- common/autotest_common.sh@643 -- # es=22 00:09:10.206 10:14:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:10.206 10:14:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:10.206 10:14:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:10.206 00:09:10.206 real 0m0.075s 00:09:10.206 user 0m0.040s 00:09:10.206 sys 0m0.034s 00:09:10.206 10:14:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.206 ************************************ 00:09:10.206 END TEST dd_wrong_blocksize 00:09:10.206 ************************************ 00:09:10.206 10:14:23 -- common/autotest_common.sh@10 -- # set +x 00:09:10.206 10:14:23 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:09:10.206 10:14:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:10.206 10:14:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:10.206 10:14:23 -- common/autotest_common.sh@10 -- # set +x 00:09:10.206 ************************************ 00:09:10.206 START TEST dd_smaller_blocksize 00:09:10.206 ************************************ 00:09:10.206 10:14:23 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:09:10.206 10:14:23 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:10.206 10:14:23 -- common/autotest_common.sh@640 -- # local es=0 00:09:10.206 10:14:23 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:10.206 10:14:23 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.206 10:14:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:10.206 10:14:23 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.206 10:14:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:10.206 10:14:23 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.206 10:14:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:10.206 10:14:23 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.206 10:14:23 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:09:10.206 10:14:23 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:10.206 [2024-07-26 10:14:23.587359] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:09:10.206 [2024-07-26 10:14:23.587478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71326 ] 00:09:10.465 [2024-07-26 10:14:23.728345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.465 [2024-07-26 10:14:23.838401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.724 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:10.724 [2024-07-26 10:14:23.934427] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:09:10.724 [2024-07-26 10:14:23.934462] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:10.724 [2024-07-26 10:14:24.054079] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:10.724 10:14:24 -- common/autotest_common.sh@643 -- # es=244 00:09:10.724 10:14:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:10.724 10:14:24 -- common/autotest_common.sh@652 -- # es=116 00:09:10.724 10:14:24 -- common/autotest_common.sh@653 -- # case "$es" in 00:09:10.724 10:14:24 -- common/autotest_common.sh@660 -- # es=1 00:09:10.724 10:14:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:10.724 00:09:10.724 real 0m0.616s 00:09:10.724 user 0m0.355s 00:09:10.724 sys 0m0.155s 00:09:10.724 10:14:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.724 ************************************ 00:09:10.724 END TEST dd_smaller_blocksize 00:09:10.724 ************************************ 00:09:10.724 10:14:24 -- common/autotest_common.sh@10 -- # set +x 00:09:10.983 10:14:24 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:09:10.983 10:14:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:10.983 10:14:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:10.983 10:14:24 -- common/autotest_common.sh@10 -- # set +x 00:09:10.983 ************************************ 00:09:10.983 START TEST dd_invalid_count 00:09:10.983 ************************************ 00:09:10.983 10:14:24 -- common/autotest_common.sh@1104 -- # invalid_count 00:09:10.983 10:14:24 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:10.983 10:14:24 -- common/autotest_common.sh@640 -- # local es=0 00:09:10.983 10:14:24 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:10.983 10:14:24 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.983 10:14:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:10.983 10:14:24 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.983 10:14:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:10.983 10:14:24 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.983 10:14:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:10.983 10:14:24 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.983 10:14:24 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:10.983 10:14:24 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:10.983 [2024-07-26 10:14:24.247361] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:09:10.983 10:14:24 -- common/autotest_common.sh@643 -- # es=22 00:09:10.983 10:14:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:10.983 10:14:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:10.983 10:14:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:10.983 00:09:10.983 real 0m0.061s 00:09:10.983 user 0m0.033s 00:09:10.983 sys 0m0.027s 00:09:10.983 10:14:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.983 10:14:24 -- common/autotest_common.sh@10 -- # set +x 00:09:10.983 ************************************ 00:09:10.983 END TEST dd_invalid_count 00:09:10.983 ************************************ 00:09:10.983 10:14:24 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:09:10.983 10:14:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:10.983 10:14:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:10.983 10:14:24 -- common/autotest_common.sh@10 -- # set +x 00:09:10.983 ************************************ 00:09:10.983 START TEST dd_invalid_oflag 00:09:10.983 ************************************ 00:09:10.983 10:14:24 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:09:10.983 10:14:24 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:10.983 10:14:24 -- common/autotest_common.sh@640 -- # local es=0 00:09:10.983 10:14:24 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:10.983 10:14:24 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.983 10:14:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:10.983 10:14:24 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.983 10:14:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:10.983 10:14:24 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.984 10:14:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:10.984 10:14:24 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.984 10:14:24 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:10.984 10:14:24 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:10.984 [2024-07-26 10:14:24.363861] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:09:10.984 10:14:24 -- common/autotest_common.sh@643 -- # es=22 00:09:10.984 10:14:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:10.984 10:14:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:10.984 
10:14:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:10.984 00:09:10.984 real 0m0.072s 00:09:10.984 user 0m0.046s 00:09:10.984 sys 0m0.024s 00:09:10.984 10:14:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.984 10:14:24 -- common/autotest_common.sh@10 -- # set +x 00:09:10.984 ************************************ 00:09:10.984 END TEST dd_invalid_oflag 00:09:10.984 ************************************ 00:09:10.984 10:14:24 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:09:10.984 10:14:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:10.984 10:14:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:10.984 10:14:24 -- common/autotest_common.sh@10 -- # set +x 00:09:10.984 ************************************ 00:09:10.984 START TEST dd_invalid_iflag 00:09:10.984 ************************************ 00:09:10.984 10:14:24 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:09:10.984 10:14:24 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:10.984 10:14:24 -- common/autotest_common.sh@640 -- # local es=0 00:09:10.984 10:14:24 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:10.984 10:14:24 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.984 10:14:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:10.984 10:14:24 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.984 10:14:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:10.984 10:14:24 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.984 10:14:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:10.984 10:14:24 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.984 10:14:24 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:10.984 10:14:24 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:11.243 [2024-07-26 10:14:24.479695] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:09:11.243 10:14:24 -- common/autotest_common.sh@643 -- # es=22 00:09:11.243 10:14:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:11.243 10:14:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:11.243 10:14:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:11.243 00:09:11.243 real 0m0.062s 00:09:11.243 user 0m0.035s 00:09:11.243 sys 0m0.026s 00:09:11.243 10:14:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.243 10:14:24 -- common/autotest_common.sh@10 -- # set +x 00:09:11.243 ************************************ 00:09:11.243 END TEST dd_invalid_iflag 00:09:11.243 ************************************ 00:09:11.243 10:14:24 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:09:11.243 10:14:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:11.243 10:14:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:11.243 10:14:24 -- common/autotest_common.sh@10 -- # set +x 00:09:11.243 ************************************ 00:09:11.243 START TEST dd_unknown_flag 00:09:11.243 ************************************ 00:09:11.243 10:14:24 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:09:11.243 10:14:24 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:11.243 10:14:24 -- common/autotest_common.sh@640 -- # local es=0 00:09:11.243 10:14:24 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:11.243 10:14:24 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.243 10:14:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:11.243 10:14:24 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.243 10:14:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:11.243 10:14:24 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.243 10:14:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:11.243 10:14:24 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.243 10:14:24 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:11.243 10:14:24 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:11.243 [2024-07-26 10:14:24.596416] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:09:11.243 [2024-07-26 10:14:24.596554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71417 ] 00:09:11.502 [2024-07-26 10:14:24.734931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.502 [2024-07-26 10:14:24.830478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.502 [2024-07-26 10:14:24.922856] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:09:11.502 [2024-07-26 10:14:24.922940] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:09:11.502 [2024-07-26 10:14:24.922970] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:09:11.502 [2024-07-26 10:14:24.922982] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:11.760 [2024-07-26 10:14:25.042397] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:11.760 10:14:25 -- common/autotest_common.sh@643 -- # es=236 00:09:11.760 10:14:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:11.760 10:14:25 -- common/autotest_common.sh@652 -- # es=108 00:09:11.760 10:14:25 -- common/autotest_common.sh@653 -- # case "$es" in 00:09:11.760 10:14:25 -- common/autotest_common.sh@660 -- # es=1 00:09:11.760 10:14:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:11.760 00:09:11.760 real 0m0.593s 00:09:11.760 user 0m0.339s 00:09:11.760 sys 0m0.149s 00:09:11.760 10:14:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.760 ************************************ 00:09:11.760 END TEST dd_unknown_flag 00:09:11.760 10:14:25 -- 
common/autotest_common.sh@10 -- # set +x 00:09:11.760 ************************************ 00:09:11.760 10:14:25 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:09:11.760 10:14:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:11.760 10:14:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:11.760 10:14:25 -- common/autotest_common.sh@10 -- # set +x 00:09:11.760 ************************************ 00:09:11.760 START TEST dd_invalid_json 00:09:11.760 ************************************ 00:09:11.760 10:14:25 -- common/autotest_common.sh@1104 -- # invalid_json 00:09:11.760 10:14:25 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:11.760 10:14:25 -- dd/negative_dd.sh@95 -- # : 00:09:11.760 10:14:25 -- common/autotest_common.sh@640 -- # local es=0 00:09:11.760 10:14:25 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:11.760 10:14:25 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.760 10:14:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:11.760 10:14:25 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.760 10:14:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:11.760 10:14:25 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.760 10:14:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:11.760 10:14:25 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.760 10:14:25 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:11.760 10:14:25 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:12.018 [2024-07-26 10:14:25.229920] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
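(Annotation, not part of the captured output.) Every negative spdk_dd case in this trace has the same shape: the NOT helper runs spdk_dd with a single invalid option, and the es arithmetic that follows normalizes the exit status before asserting that the command really failed. A minimal bash sketch of that pattern, with the per-code case table in autotest_common.sh collapsed for brevity:

  es=$?                                  # status of the NOT-wrapped spdk_dd call
  (( es > 128 )) && es=$(( es - 128 ))   # fold 128+signal style codes down, e.g. 244 -> 116 above
  (( es != 0 )) || exit 1                # the negative test passes only if spdk_dd failed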
00:09:12.018 [2024-07-26 10:14:25.230029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71446 ] 00:09:12.018 [2024-07-26 10:14:25.360589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.018 [2024-07-26 10:14:25.462377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.018 [2024-07-26 10:14:25.462580] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:09:12.018 [2024-07-26 10:14:25.462616] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:12.018 [2024-07-26 10:14:25.462663] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:12.277 10:14:25 -- common/autotest_common.sh@643 -- # es=234 00:09:12.277 10:14:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:12.277 10:14:25 -- common/autotest_common.sh@652 -- # es=106 00:09:12.277 10:14:25 -- common/autotest_common.sh@653 -- # case "$es" in 00:09:12.277 10:14:25 -- common/autotest_common.sh@660 -- # es=1 00:09:12.277 10:14:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:12.277 00:09:12.277 real 0m0.391s 00:09:12.277 user 0m0.229s 00:09:12.277 sys 0m0.060s 00:09:12.277 10:14:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.277 ************************************ 00:09:12.277 END TEST dd_invalid_json 00:09:12.277 ************************************ 00:09:12.277 10:14:25 -- common/autotest_common.sh@10 -- # set +x 00:09:12.277 00:09:12.277 real 0m2.919s 00:09:12.277 user 0m1.526s 00:09:12.277 sys 0m1.034s 00:09:12.277 10:14:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.277 10:14:25 -- common/autotest_common.sh@10 -- # set +x 00:09:12.277 ************************************ 00:09:12.277 END TEST spdk_dd_negative 00:09:12.277 ************************************ 00:09:12.277 00:09:12.277 real 1m20.524s 00:09:12.277 user 0m50.593s 00:09:12.277 sys 0m20.566s 00:09:12.277 10:14:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.277 10:14:25 -- common/autotest_common.sh@10 -- # set +x 00:09:12.277 ************************************ 00:09:12.277 END TEST spdk_dd 00:09:12.277 ************************************ 00:09:12.277 10:14:25 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:09:12.277 10:14:25 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:09:12.277 10:14:25 -- spdk/autotest.sh@268 -- # timing_exit lib 00:09:12.277 10:14:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:12.277 10:14:25 -- common/autotest_common.sh@10 -- # set +x 00:09:12.536 10:14:25 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:09:12.536 10:14:25 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:09:12.536 10:14:25 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:09:12.536 10:14:25 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:09:12.536 10:14:25 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:09:12.536 10:14:25 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:09:12.536 10:14:25 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:12.536 10:14:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:12.536 10:14:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:12.536 10:14:25 -- common/autotest_common.sh@10 -- # set +x 00:09:12.536 ************************************ 00:09:12.536 START 
TEST nvmf_tcp 00:09:12.536 ************************************ 00:09:12.536 10:14:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:12.536 * Looking for test storage... 00:09:12.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:12.536 10:14:25 -- nvmf/nvmf.sh@10 -- # uname -s 00:09:12.536 10:14:25 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:12.536 10:14:25 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:12.536 10:14:25 -- nvmf/common.sh@7 -- # uname -s 00:09:12.536 10:14:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.536 10:14:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.536 10:14:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.536 10:14:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.536 10:14:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.536 10:14:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.536 10:14:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.536 10:14:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.536 10:14:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.536 10:14:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.536 10:14:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:09:12.536 10:14:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:09:12.536 10:14:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.536 10:14:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.536 10:14:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:12.536 10:14:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:12.536 10:14:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.536 10:14:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.536 10:14:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.536 10:14:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.536 10:14:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.536 10:14:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.536 10:14:25 -- paths/export.sh@5 -- # export PATH 00:09:12.536 10:14:25 
-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.536 10:14:25 -- nvmf/common.sh@46 -- # : 0 00:09:12.536 10:14:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:12.536 10:14:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:12.536 10:14:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:12.536 10:14:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.536 10:14:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.536 10:14:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:12.536 10:14:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:12.536 10:14:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:12.536 10:14:25 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:12.536 10:14:25 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:09:12.536 10:14:25 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:09:12.536 10:14:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:12.536 10:14:25 -- common/autotest_common.sh@10 -- # set +x 00:09:12.536 10:14:25 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:09:12.536 10:14:25 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:12.536 10:14:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:12.536 10:14:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:12.536 10:14:25 -- common/autotest_common.sh@10 -- # set +x 00:09:12.536 ************************************ 00:09:12.536 START TEST nvmf_host_management 00:09:12.536 ************************************ 00:09:12.536 10:14:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:12.536 * Looking for test storage... 
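(Annotation, not part of the captured output.) The nvmftestinit call traced below runs with NET_TYPE=virt, so it builds a veth-plus-bridge topology: the initiator interface stays in the root namespace as 10.0.0.1, the target interfaces are moved into the nvmf_tgt_ns_spdk namespace as 10.0.0.2 and 10.0.0.3, and everything is joined through the nvmf_br bridge with an iptables rule accepting TCP port 4420. Condensed from the commands visible in the trace (the link-up steps and the second target interface are handled the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT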
00:09:12.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:12.536 10:14:25 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:12.536 10:14:25 -- nvmf/common.sh@7 -- # uname -s 00:09:12.536 10:14:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.537 10:14:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.537 10:14:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.537 10:14:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.537 10:14:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.537 10:14:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.537 10:14:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.537 10:14:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.537 10:14:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.537 10:14:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.537 10:14:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:09:12.537 10:14:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:09:12.537 10:14:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.537 10:14:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.537 10:14:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:12.537 10:14:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:12.537 10:14:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.537 10:14:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.537 10:14:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.537 10:14:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.537 10:14:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.537 10:14:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.537 10:14:25 -- 
paths/export.sh@5 -- # export PATH 00:09:12.537 10:14:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.537 10:14:25 -- nvmf/common.sh@46 -- # : 0 00:09:12.537 10:14:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:12.537 10:14:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:12.537 10:14:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:12.537 10:14:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.537 10:14:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.537 10:14:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:12.537 10:14:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:12.537 10:14:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:12.537 10:14:25 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:12.537 10:14:25 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:12.537 10:14:25 -- target/host_management.sh@104 -- # nvmftestinit 00:09:12.537 10:14:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:12.537 10:14:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.537 10:14:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:12.537 10:14:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:12.537 10:14:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:12.537 10:14:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.537 10:14:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:12.537 10:14:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.537 10:14:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:12.537 10:14:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:12.537 10:14:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:12.537 10:14:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:12.537 10:14:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:12.537 10:14:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:12.537 10:14:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.537 10:14:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.537 10:14:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:12.537 10:14:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:12.537 10:14:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:12.537 10:14:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:12.537 10:14:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:12.537 10:14:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.537 10:14:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:12.537 10:14:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:12.537 10:14:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:12.537 10:14:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:12.537 10:14:25 -- nvmf/common.sh@153 -- # ip link set 
nvmf_init_br nomaster 00:09:12.537 Cannot find device "nvmf_init_br" 00:09:12.537 10:14:25 -- nvmf/common.sh@153 -- # true 00:09:12.537 10:14:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:12.795 Cannot find device "nvmf_tgt_br" 00:09:12.796 10:14:25 -- nvmf/common.sh@154 -- # true 00:09:12.796 10:14:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:12.796 Cannot find device "nvmf_tgt_br2" 00:09:12.796 10:14:26 -- nvmf/common.sh@155 -- # true 00:09:12.796 10:14:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:12.796 Cannot find device "nvmf_init_br" 00:09:12.796 10:14:26 -- nvmf/common.sh@156 -- # true 00:09:12.796 10:14:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:12.796 Cannot find device "nvmf_tgt_br" 00:09:12.796 10:14:26 -- nvmf/common.sh@157 -- # true 00:09:12.796 10:14:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:12.796 Cannot find device "nvmf_tgt_br2" 00:09:12.796 10:14:26 -- nvmf/common.sh@158 -- # true 00:09:12.796 10:14:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:12.796 Cannot find device "nvmf_br" 00:09:12.796 10:14:26 -- nvmf/common.sh@159 -- # true 00:09:12.796 10:14:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:12.796 Cannot find device "nvmf_init_if" 00:09:12.796 10:14:26 -- nvmf/common.sh@160 -- # true 00:09:12.796 10:14:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:12.796 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.796 10:14:26 -- nvmf/common.sh@161 -- # true 00:09:12.796 10:14:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:12.796 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.796 10:14:26 -- nvmf/common.sh@162 -- # true 00:09:12.796 10:14:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:12.796 10:14:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:12.796 10:14:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:12.796 10:14:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:12.796 10:14:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:12.796 10:14:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:12.796 10:14:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:12.796 10:14:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:12.796 10:14:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:12.796 10:14:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:12.796 10:14:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:12.796 10:14:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:12.796 10:14:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:12.796 10:14:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:12.796 10:14:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:12.796 10:14:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:12.796 10:14:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:13.054 10:14:26 -- nvmf/common.sh@192 
-- # ip link set nvmf_br up 00:09:13.054 10:14:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:13.054 10:14:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:13.054 10:14:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:13.054 10:14:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:13.054 10:14:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:13.054 10:14:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:13.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:13.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:09:13.054 00:09:13.054 --- 10.0.0.2 ping statistics --- 00:09:13.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.054 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:13.054 10:14:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:13.054 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:13.054 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:09:13.054 00:09:13.054 --- 10.0.0.3 ping statistics --- 00:09:13.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.054 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:09:13.054 10:14:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:13.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:13.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:13.054 00:09:13.054 --- 10.0.0.1 ping statistics --- 00:09:13.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.054 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:13.054 10:14:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.054 10:14:26 -- nvmf/common.sh@421 -- # return 0 00:09:13.054 10:14:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:13.054 10:14:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.054 10:14:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:13.054 10:14:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:13.054 10:14:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.054 10:14:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:13.054 10:14:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:13.054 10:14:26 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:09:13.054 10:14:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:13.054 10:14:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:13.054 10:14:26 -- common/autotest_common.sh@10 -- # set +x 00:09:13.054 ************************************ 00:09:13.054 START TEST nvmf_host_management 00:09:13.054 ************************************ 00:09:13.054 10:14:26 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:09:13.054 10:14:26 -- target/host_management.sh@69 -- # starttarget 00:09:13.054 10:14:26 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:13.054 10:14:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:13.054 10:14:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:13.054 10:14:26 -- common/autotest_common.sh@10 -- # set +x 00:09:13.054 10:14:26 -- nvmf/common.sh@469 -- # nvmfpid=71702 00:09:13.054 10:14:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:13.054 10:14:26 -- 
nvmf/common.sh@470 -- # waitforlisten 71702 00:09:13.054 10:14:26 -- common/autotest_common.sh@819 -- # '[' -z 71702 ']' 00:09:13.054 10:14:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.054 10:14:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:13.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.054 10:14:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.054 10:14:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:13.054 10:14:26 -- common/autotest_common.sh@10 -- # set +x 00:09:13.054 [2024-07-26 10:14:26.404822] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:09:13.054 [2024-07-26 10:14:26.404956] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.313 [2024-07-26 10:14:26.547156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:13.313 [2024-07-26 10:14:26.657977] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:13.313 [2024-07-26 10:14:26.658333] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.313 [2024-07-26 10:14:26.658358] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:13.313 [2024-07-26 10:14:26.658370] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:13.313 [2024-07-26 10:14:26.658927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:13.313 [2024-07-26 10:14:26.660672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.313 [2024-07-26 10:14:26.660819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:13.313 [2024-07-26 10:14:26.660905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.249 10:14:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:14.249 10:14:27 -- common/autotest_common.sh@852 -- # return 0 00:09:14.249 10:14:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:14.249 10:14:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:14.249 10:14:27 -- common/autotest_common.sh@10 -- # set +x 00:09:14.249 10:14:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.249 10:14:27 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:14.249 10:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.249 10:14:27 -- common/autotest_common.sh@10 -- # set +x 00:09:14.249 [2024-07-26 10:14:27.439566] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.249 10:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.249 10:14:27 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:14.249 10:14:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:14.249 10:14:27 -- common/autotest_common.sh@10 -- # set +x 00:09:14.249 10:14:27 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:14.249 10:14:27 -- target/host_management.sh@23 -- # cat 00:09:14.249 10:14:27 -- target/host_management.sh@30 -- # 
rpc_cmd 00:09:14.249 10:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.249 10:14:27 -- common/autotest_common.sh@10 -- # set +x 00:09:14.249 Malloc0 00:09:14.249 [2024-07-26 10:14:27.519941] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.249 10:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.249 10:14:27 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:14.249 10:14:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:14.249 10:14:27 -- common/autotest_common.sh@10 -- # set +x 00:09:14.249 10:14:27 -- target/host_management.sh@73 -- # perfpid=71759 00:09:14.249 10:14:27 -- target/host_management.sh@74 -- # waitforlisten 71759 /var/tmp/bdevperf.sock 00:09:14.249 10:14:27 -- common/autotest_common.sh@819 -- # '[' -z 71759 ']' 00:09:14.249 10:14:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:14.249 10:14:27 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:14.249 10:14:27 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:14.249 10:14:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:14.249 10:14:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:14.249 10:14:27 -- nvmf/common.sh@520 -- # config=() 00:09:14.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:14.249 10:14:27 -- nvmf/common.sh@520 -- # local subsystem config 00:09:14.249 10:14:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:14.249 10:14:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:14.249 10:14:27 -- common/autotest_common.sh@10 -- # set +x 00:09:14.249 10:14:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:14.249 { 00:09:14.249 "params": { 00:09:14.249 "name": "Nvme$subsystem", 00:09:14.249 "trtype": "$TEST_TRANSPORT", 00:09:14.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:14.249 "adrfam": "ipv4", 00:09:14.249 "trsvcid": "$NVMF_PORT", 00:09:14.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:14.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:14.249 "hdgst": ${hdgst:-false}, 00:09:14.249 "ddgst": ${ddgst:-false} 00:09:14.249 }, 00:09:14.249 "method": "bdev_nvme_attach_controller" 00:09:14.249 } 00:09:14.249 EOF 00:09:14.249 )") 00:09:14.249 10:14:27 -- nvmf/common.sh@542 -- # cat 00:09:14.249 10:14:27 -- nvmf/common.sh@544 -- # jq . 00:09:14.249 10:14:27 -- nvmf/common.sh@545 -- # IFS=, 00:09:14.249 10:14:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:14.249 "params": { 00:09:14.249 "name": "Nvme0", 00:09:14.249 "trtype": "tcp", 00:09:14.249 "traddr": "10.0.0.2", 00:09:14.249 "adrfam": "ipv4", 00:09:14.249 "trsvcid": "4420", 00:09:14.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:14.249 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:14.249 "hdgst": false, 00:09:14.249 "ddgst": false 00:09:14.249 }, 00:09:14.249 "method": "bdev_nvme_attach_controller" 00:09:14.249 }' 00:09:14.249 [2024-07-26 10:14:27.631931] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
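(Annotation, not part of the captured output.) By this point the target side is provisioned: the TCP transport exists, Malloc0 backs the namespace, and the subsystem is listening on 10.0.0.2 port 4420 for the generated hostnqn. The test drives this through the batched rpcs.txt passed to rpc_cmd above; a hand-run equivalent with scripts/rpc.py would look roughly like the sketch below. The RPC names are real SPDK commands, but the exact flags and ordering inside rpcs.txt are an assumption; the NQNs, serial number, and sizes are the values printed earlier in this trace.

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512            # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE from the trace
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

The bdevperf JSON printed just above then attaches to that subsystem via bdev_nvme_attach_controller, and the later nvmf_subsystem_remove_host call is what triggers the ABORTED - SQ DELETION completions seen further down.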
00:09:14.249 [2024-07-26 10:14:27.632054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71759 ] 00:09:14.508 [2024-07-26 10:14:27.775834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.508 [2024-07-26 10:14:27.877439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.768 Running I/O for 10 seconds... 00:09:15.337 10:14:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:15.337 10:14:28 -- common/autotest_common.sh@852 -- # return 0 00:09:15.337 10:14:28 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:15.337 10:14:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:15.337 10:14:28 -- common/autotest_common.sh@10 -- # set +x 00:09:15.337 10:14:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:15.337 10:14:28 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:15.337 10:14:28 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:15.337 10:14:28 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:15.337 10:14:28 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:15.337 10:14:28 -- target/host_management.sh@52 -- # local ret=1 00:09:15.337 10:14:28 -- target/host_management.sh@53 -- # local i 00:09:15.337 10:14:28 -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:15.337 10:14:28 -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:15.337 10:14:28 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:15.337 10:14:28 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:15.337 10:14:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:15.337 10:14:28 -- common/autotest_common.sh@10 -- # set +x 00:09:15.337 10:14:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:15.337 10:14:28 -- target/host_management.sh@55 -- # read_io_count=1398 00:09:15.337 10:14:28 -- target/host_management.sh@58 -- # '[' 1398 -ge 100 ']' 00:09:15.337 10:14:28 -- target/host_management.sh@59 -- # ret=0 00:09:15.337 10:14:28 -- target/host_management.sh@60 -- # break 00:09:15.337 10:14:28 -- target/host_management.sh@64 -- # return 0 00:09:15.338 10:14:28 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:15.338 10:14:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:15.338 10:14:28 -- common/autotest_common.sh@10 -- # set +x 00:09:15.338 [2024-07-26 10:14:28.636656] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d0db0 is same with the state(5) to be set 00:09:15.338 [2024-07-26 10:14:28.636710] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d0db0 is same with the state(5) to be set 00:09:15.338 [2024-07-26 10:14:28.636722] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d0db0 is same with the state(5) to be set 00:09:15.338 [2024-07-26 10:14:28.636731] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d0db0 is same with the state(5) to be set 00:09:15.338 [2024-07-26 10:14:28.636742] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x22d0db0 is same with the state(5) to be set 00:09:15.338 [2024-07-26 10:14:28.636750] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d0db0 is same with the state(5) to be set 00:09:15.338 [2024-07-26 10:14:28.636758] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d0db0 is same with the state(5) to be set 00:09:15.338 [2024-07-26 10:14:28.636767] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d0db0 is same with the state(5) to be set 00:09:15.338 [2024-07-26 10:14:28.636775] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d0db0 is same with the state(5) to be set 00:09:15.338 [2024-07-26 10:14:28.636784] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d0db0 is same with the state(5) to be set 00:09:15.338 [2024-07-26 10:14:28.636792] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d0db0 is same with the state(5) to be set 00:09:15.338 [2024-07-26 10:14:28.636800] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d0db0 is same with the state(5) to be set 00:09:15.338 [2024-07-26 10:14:28.636808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d0db0 is same with the state(5) to be set 00:09:15.338 [2024-07-26 10:14:28.636816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d0db0 is same with the state(5) to be set 00:09:15.338 [2024-07-26 10:14:28.636824] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d0db0 is same with the state(5) to be set 00:09:15.338 [2024-07-26 10:14:28.636927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.338 [2024-07-26 10:14:28.636959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:15.338 [2024-07-26 10:14:28.637018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.338 [2024-07-26 10:14:28.637036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:15.338 [2024-07-26 10:14:28.637050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.338 [2024-07-26 10:14:28.637059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:15.338 [2024-07-26 10:14:28.637072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.338 [2024-07-26 10:14:28.637082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:15.338 [2024-07-26 10:14:28.637093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.338 [2024-07-26 10:14:28.637102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:15.338 [2024-07-26 10:14:28.637113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.338 [2024-07-26 10:14:28.637122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:15.338 [2024-07-26 10:14:28.637134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.338 [2024-07-26 10:14:28.637142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:15.338 [2024-07-26 10:14:28.637161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.338 [2024-07-26 10:14:28.637170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:15.338 [2024-07-26 10:14:28.637181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.338 [2024-07-26 10:14:28.637191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:15.338 [2024-07-26 10:14:28.637202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.338 [2024-07-26 10:14:28.637211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:15.338 [2024-07-26 10:14:28.637222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.338 [2024-07-26 10:14:28.637231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:15.338 [2024-07-26 10:14:28.637242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.338 [2024-07-26 10:14:28.637250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:15.338 [2024-07-26 10:14:28.637262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.338 [2024-07-26 10:14:28.637270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:15.338 [2024-07-26 10:14:28.637281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.338 [2024-07-26 10:14:28.637290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:15.338 [2024-07-26 10:14:28.637302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.338 [2024-07-26 10:14:28.637310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:15.338 [2024-07-26 10:14:28.637321] nvme_qpair.c: 
243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [2024-07-26 10:14:28.637338 - 10:14:28.638345] the same pair of notices repeats for every outstanding READ/WRITE on sqid:1 (cids 0-63, lba 62336-72960, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:15.340 [2024-07-26 10:14:28.638355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5090 is same with the state(5) to be set 00:09:15.340 [2024-07-26 10:14:28.638431] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcb5090 was disconnected and freed. reset controller. 
00:09:15.340 [2024-07-26 10:14:28.639617] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:15.340 10:14:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:15.340 10:14:28 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:15.340 task offset: 67200 on job bdev=Nvme0n1 fails 00:09:15.340 00:09:15.340 Latency(us) 00:09:15.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.340 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:15.340 Job: Nvme0n1 ended in about 0.59 seconds with error 00:09:15.340 Verification LBA range: start 0x0 length 0x400 00:09:15.340 Nvme0n1 : 0.59 2615.98 163.50 109.35 0.00 23102.93 5957.82 30146.56 00:09:15.340 =================================================================================================================== 00:09:15.340 Total : 2615.98 163.50 109.35 0.00 23102.93 5957.82 30146.56 00:09:15.340 10:14:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:15.340 10:14:28 -- common/autotest_common.sh@10 -- # set +x 00:09:15.340 [2024-07-26 10:14:28.641878] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:15.340 [2024-07-26 10:14:28.641914] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc78bc0 (9): Bad file descriptor 00:09:15.340 10:14:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:15.340 10:14:28 -- target/host_management.sh@87 -- # sleep 1 00:09:15.340 [2024-07-26 10:14:28.652834] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:16.276 10:14:29 -- target/host_management.sh@91 -- # kill -9 71759 00:09:16.276 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (71759) - No such process 00:09:16.276 10:14:29 -- target/host_management.sh@91 -- # true 00:09:16.276 10:14:29 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:16.276 10:14:29 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:16.276 10:14:29 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:16.276 10:14:29 -- nvmf/common.sh@520 -- # config=() 00:09:16.276 10:14:29 -- nvmf/common.sh@520 -- # local subsystem config 00:09:16.276 10:14:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:16.276 10:14:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:16.276 { 00:09:16.276 "params": { 00:09:16.276 "name": "Nvme$subsystem", 00:09:16.276 "trtype": "$TEST_TRANSPORT", 00:09:16.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:16.276 "adrfam": "ipv4", 00:09:16.276 "trsvcid": "$NVMF_PORT", 00:09:16.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:16.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:16.276 "hdgst": ${hdgst:-false}, 00:09:16.276 "ddgst": ${ddgst:-false} 00:09:16.276 }, 00:09:16.276 "method": "bdev_nvme_attach_controller" 00:09:16.276 } 00:09:16.276 EOF 00:09:16.276 )") 00:09:16.276 10:14:29 -- nvmf/common.sh@542 -- # cat 00:09:16.276 10:14:29 -- nvmf/common.sh@544 -- # jq . 
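(The template above expands, via the cat/jq/printf steps traced here, into a single bdev_nvme_attach_controller entry. Purely as an illustration, the same attach could be issued by hand against a bdevperf started with -r /var/tmp/bdevperf.sock; this run instead feeds the config through --json /dev/fd/62, so the command below is a sketch, not part of the test.)

# Sketch only: standalone equivalent of the generated config entry.
# Addresses and NQNs are taken from the log above; the RPC socket path is assumed.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0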
00:09:16.276 10:14:29 -- nvmf/common.sh@545 -- # IFS=, 00:09:16.276 10:14:29 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:16.276 "params": { 00:09:16.276 "name": "Nvme0", 00:09:16.276 "trtype": "tcp", 00:09:16.276 "traddr": "10.0.0.2", 00:09:16.276 "adrfam": "ipv4", 00:09:16.276 "trsvcid": "4420", 00:09:16.276 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:16.276 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:16.276 "hdgst": false, 00:09:16.276 "ddgst": false 00:09:16.276 }, 00:09:16.276 "method": "bdev_nvme_attach_controller" 00:09:16.276 }' 00:09:16.276 [2024-07-26 10:14:29.704042] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:09:16.276 [2024-07-26 10:14:29.704145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71797 ] 00:09:16.535 [2024-07-26 10:14:29.838998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.535 [2024-07-26 10:14:29.951617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.793 Running I/O for 1 seconds... 00:09:17.730 00:09:17.730 Latency(us) 00:09:17.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.730 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:17.730 Verification LBA range: start 0x0 length 0x400 00:09:17.730 Nvme0n1 : 1.02 2774.54 173.41 0.00 0.00 22697.15 1638.40 28001.75 00:09:17.730 =================================================================================================================== 00:09:17.730 Total : 2774.54 173.41 0.00 0.00 22697.15 1638.40 28001.75 00:09:17.989 10:14:31 -- target/host_management.sh@101 -- # stoptarget 00:09:17.989 10:14:31 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:17.989 10:14:31 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:17.989 10:14:31 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:17.989 10:14:31 -- target/host_management.sh@40 -- # nvmftestfini 00:09:17.989 10:14:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:17.989 10:14:31 -- nvmf/common.sh@116 -- # sync 00:09:18.247 10:14:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:18.247 10:14:31 -- nvmf/common.sh@119 -- # set +e 00:09:18.247 10:14:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:18.247 10:14:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:18.247 rmmod nvme_tcp 00:09:18.247 rmmod nvme_fabrics 00:09:18.247 rmmod nvme_keyring 00:09:18.247 10:14:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:18.247 10:14:31 -- nvmf/common.sh@123 -- # set -e 00:09:18.247 10:14:31 -- nvmf/common.sh@124 -- # return 0 00:09:18.247 10:14:31 -- nvmf/common.sh@477 -- # '[' -n 71702 ']' 00:09:18.247 10:14:31 -- nvmf/common.sh@478 -- # killprocess 71702 00:09:18.247 10:14:31 -- common/autotest_common.sh@926 -- # '[' -z 71702 ']' 00:09:18.247 10:14:31 -- common/autotest_common.sh@930 -- # kill -0 71702 00:09:18.247 10:14:31 -- common/autotest_common.sh@931 -- # uname 00:09:18.247 10:14:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:18.247 10:14:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71702 00:09:18.247 10:14:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:09:18.247 10:14:31 -- 
common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:09:18.247 killing process with pid 71702 00:09:18.247 10:14:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71702' 00:09:18.247 10:14:31 -- common/autotest_common.sh@945 -- # kill 71702 00:09:18.247 10:14:31 -- common/autotest_common.sh@950 -- # wait 71702 00:09:18.506 [2024-07-26 10:14:31.783096] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:18.506 10:14:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:18.506 10:14:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:18.506 10:14:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:18.506 10:14:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:18.506 10:14:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:18.506 10:14:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.506 10:14:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.506 10:14:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.506 10:14:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:18.506 00:09:18.506 real 0m5.504s 00:09:18.506 user 0m22.922s 00:09:18.506 sys 0m1.378s 00:09:18.506 10:14:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.506 10:14:31 -- common/autotest_common.sh@10 -- # set +x 00:09:18.506 ************************************ 00:09:18.506 END TEST nvmf_host_management 00:09:18.506 ************************************ 00:09:18.506 10:14:31 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:09:18.506 00:09:18.506 real 0m6.030s 00:09:18.506 user 0m23.048s 00:09:18.506 sys 0m1.600s 00:09:18.506 10:14:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.506 10:14:31 -- common/autotest_common.sh@10 -- # set +x 00:09:18.506 ************************************ 00:09:18.506 END TEST nvmf_host_management 00:09:18.506 ************************************ 00:09:18.506 10:14:31 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:18.506 10:14:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:18.506 10:14:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:18.506 10:14:31 -- common/autotest_common.sh@10 -- # set +x 00:09:18.506 ************************************ 00:09:18.506 START TEST nvmf_lvol 00:09:18.506 ************************************ 00:09:18.506 10:14:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:18.765 * Looking for test storage... 
00:09:18.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:18.765 10:14:32 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:18.765 10:14:32 -- nvmf/common.sh@7 -- # uname -s 00:09:18.765 10:14:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.765 10:14:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.765 10:14:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.765 10:14:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.765 10:14:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.765 10:14:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.765 10:14:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.765 10:14:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.765 10:14:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.765 10:14:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.765 10:14:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:09:18.765 10:14:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:09:18.765 10:14:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.765 10:14:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.765 10:14:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:18.765 10:14:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:18.765 10:14:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.765 10:14:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.765 10:14:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.765 10:14:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.765 10:14:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.765 10:14:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.765 10:14:32 -- 
paths/export.sh@5 -- # export PATH 00:09:18.765 10:14:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.765 10:14:32 -- nvmf/common.sh@46 -- # : 0 00:09:18.765 10:14:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:18.765 10:14:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:18.765 10:14:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:18.765 10:14:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.765 10:14:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.765 10:14:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:18.765 10:14:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:18.765 10:14:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:18.765 10:14:32 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:18.765 10:14:32 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:18.765 10:14:32 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:18.765 10:14:32 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:18.766 10:14:32 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:18.766 10:14:32 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:18.766 10:14:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:18.766 10:14:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.766 10:14:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:18.766 10:14:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:18.766 10:14:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:18.766 10:14:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.766 10:14:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.766 10:14:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.766 10:14:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:18.766 10:14:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:18.766 10:14:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:18.766 10:14:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:18.766 10:14:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:18.766 10:14:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:18.766 10:14:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.766 10:14:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.766 10:14:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:18.766 10:14:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:18.766 10:14:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:18.766 10:14:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:18.766 10:14:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:18.766 10:14:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.766 10:14:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:18.766 10:14:32 -- nvmf/common.sh@149 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:18.766 10:14:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:18.766 10:14:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:18.766 10:14:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:18.766 10:14:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:18.766 Cannot find device "nvmf_tgt_br" 00:09:18.766 10:14:32 -- nvmf/common.sh@154 -- # true 00:09:18.766 10:14:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:18.766 Cannot find device "nvmf_tgt_br2" 00:09:18.766 10:14:32 -- nvmf/common.sh@155 -- # true 00:09:18.766 10:14:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:18.766 10:14:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:18.766 Cannot find device "nvmf_tgt_br" 00:09:18.766 10:14:32 -- nvmf/common.sh@157 -- # true 00:09:18.766 10:14:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:18.766 Cannot find device "nvmf_tgt_br2" 00:09:18.766 10:14:32 -- nvmf/common.sh@158 -- # true 00:09:18.766 10:14:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:18.766 10:14:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:18.766 10:14:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:18.766 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:18.766 10:14:32 -- nvmf/common.sh@161 -- # true 00:09:18.766 10:14:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:18.766 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:18.766 10:14:32 -- nvmf/common.sh@162 -- # true 00:09:18.766 10:14:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:18.766 10:14:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:18.766 10:14:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:18.766 10:14:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:18.766 10:14:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:19.025 10:14:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:19.025 10:14:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:19.025 10:14:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:19.025 10:14:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:19.025 10:14:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:19.025 10:14:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:19.025 10:14:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:19.025 10:14:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:19.025 10:14:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:19.025 10:14:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:19.025 10:14:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:19.025 10:14:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:19.025 10:14:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:19.025 10:14:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:19.025 10:14:32 -- 
nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:19.025 10:14:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:19.025 10:14:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:19.025 10:14:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:19.025 10:14:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:19.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:19.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:09:19.025 00:09:19.025 --- 10.0.0.2 ping statistics --- 00:09:19.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.025 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:09:19.025 10:14:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:19.025 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:19.025 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:09:19.025 00:09:19.025 --- 10.0.0.3 ping statistics --- 00:09:19.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.025 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:19.025 10:14:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:19.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:19.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:19.025 00:09:19.025 --- 10.0.0.1 ping statistics --- 00:09:19.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.025 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:19.025 10:14:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:19.025 10:14:32 -- nvmf/common.sh@421 -- # return 0 00:09:19.025 10:14:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:19.025 10:14:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:19.025 10:14:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:19.025 10:14:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:19.025 10:14:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:19.025 10:14:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:19.025 10:14:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:19.025 10:14:32 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:19.025 10:14:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:19.025 10:14:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:19.025 10:14:32 -- common/autotest_common.sh@10 -- # set +x 00:09:19.025 10:14:32 -- nvmf/common.sh@469 -- # nvmfpid=72032 00:09:19.025 10:14:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:19.025 10:14:32 -- nvmf/common.sh@470 -- # waitforlisten 72032 00:09:19.025 10:14:32 -- common/autotest_common.sh@819 -- # '[' -z 72032 ']' 00:09:19.025 10:14:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.025 10:14:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:19.025 10:14:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
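(The nvmf_veth_init steps traced above reduce to the topology below; a condensed sketch using the same interface names and addresses as this run, with the individual "ip link set ... up" steps omitted.)

# Condensed sketch of the harness topology: one namespace for the target,
# veth pairs bridged to the initiator side, and a port-4420 accept rule.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br   # likewise nvmf_tgt_br and nvmf_tgt_br2
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT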
00:09:19.025 10:14:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:19.025 10:14:32 -- common/autotest_common.sh@10 -- # set +x 00:09:19.025 [2024-07-26 10:14:32.440420] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:09:19.025 [2024-07-26 10:14:32.440558] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:19.284 [2024-07-26 10:14:32.583475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:19.284 [2024-07-26 10:14:32.691522] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:19.284 [2024-07-26 10:14:32.692013] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:19.284 [2024-07-26 10:14:32.692090] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:19.284 [2024-07-26 10:14:32.692274] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:19.284 [2024-07-26 10:14:32.692543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.284 [2024-07-26 10:14:32.692734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:19.284 [2024-07-26 10:14:32.692808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.220 10:14:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:20.220 10:14:33 -- common/autotest_common.sh@852 -- # return 0 00:09:20.220 10:14:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:20.220 10:14:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:20.220 10:14:33 -- common/autotest_common.sh@10 -- # set +x 00:09:20.220 10:14:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.220 10:14:33 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:20.220 [2024-07-26 10:14:33.665924] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.478 10:14:33 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:20.736 10:14:33 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:20.736 10:14:33 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:20.995 10:14:34 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:20.995 10:14:34 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:21.253 10:14:34 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:21.512 10:14:34 -- target/nvmf_lvol.sh@29 -- # lvs=03ae55db-7b66-4b48-94b7-3741c46ca5e2 00:09:21.512 10:14:34 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 03ae55db-7b66-4b48-94b7-3741c46ca5e2 lvol 20 00:09:21.770 10:14:34 -- target/nvmf_lvol.sh@32 -- # lvol=c0b6b1cf-4143-49b9-ba6f-d2406f0144f5 00:09:21.770 10:14:34 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:21.770 10:14:35 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 c0b6b1cf-4143-49b9-ba6f-d2406f0144f5 00:09:22.029 10:14:35 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:22.287 [2024-07-26 10:14:35.646760] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.288 10:14:35 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:22.546 10:14:35 -- target/nvmf_lvol.sh@42 -- # perf_pid=72102 00:09:22.546 10:14:35 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:22.546 10:14:35 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:23.481 10:14:36 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot c0b6b1cf-4143-49b9-ba6f-d2406f0144f5 MY_SNAPSHOT 00:09:23.741 10:14:37 -- target/nvmf_lvol.sh@47 -- # snapshot=9683bca6-86be-4238-9048-a94ca90541f5 00:09:23.741 10:14:37 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize c0b6b1cf-4143-49b9-ba6f-d2406f0144f5 30 00:09:24.000 10:14:37 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 9683bca6-86be-4238-9048-a94ca90541f5 MY_CLONE 00:09:24.259 10:14:37 -- target/nvmf_lvol.sh@49 -- # clone=8cfa9e55-a769-426f-bb7c-018bbce2b763 00:09:24.259 10:14:37 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 8cfa9e55-a769-426f-bb7c-018bbce2b763 00:09:24.841 10:14:38 -- target/nvmf_lvol.sh@53 -- # wait 72102 00:09:32.993 Initializing NVMe Controllers 00:09:32.993 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:32.993 Controller IO queue size 128, less than required. 00:09:32.993 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:32.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:32.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:32.994 Initialization complete. Launching workers. 
00:09:32.994 ======================================================== 00:09:32.994 Latency(us) 00:09:32.994 Device Information : IOPS MiB/s Average min max 00:09:32.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9435.09 36.86 13566.70 471.57 71675.09 00:09:32.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9513.79 37.16 13454.92 2915.09 71108.10 00:09:32.994 ======================================================== 00:09:32.994 Total : 18948.88 74.02 13510.58 471.57 71675.09 00:09:32.994 00:09:32.994 10:14:46 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:33.253 10:14:46 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c0b6b1cf-4143-49b9-ba6f-d2406f0144f5 00:09:33.512 10:14:46 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 03ae55db-7b66-4b48-94b7-3741c46ca5e2 00:09:33.771 10:14:47 -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:33.771 10:14:47 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:33.771 10:14:47 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:33.771 10:14:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:33.771 10:14:47 -- nvmf/common.sh@116 -- # sync 00:09:33.771 10:14:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:33.771 10:14:47 -- nvmf/common.sh@119 -- # set +e 00:09:33.771 10:14:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:33.771 10:14:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:33.771 rmmod nvme_tcp 00:09:33.771 rmmod nvme_fabrics 00:09:33.771 rmmod nvme_keyring 00:09:33.771 10:14:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:33.771 10:14:47 -- nvmf/common.sh@123 -- # set -e 00:09:33.771 10:14:47 -- nvmf/common.sh@124 -- # return 0 00:09:33.771 10:14:47 -- nvmf/common.sh@477 -- # '[' -n 72032 ']' 00:09:33.771 10:14:47 -- nvmf/common.sh@478 -- # killprocess 72032 00:09:33.771 10:14:47 -- common/autotest_common.sh@926 -- # '[' -z 72032 ']' 00:09:33.771 10:14:47 -- common/autotest_common.sh@930 -- # kill -0 72032 00:09:33.771 10:14:47 -- common/autotest_common.sh@931 -- # uname 00:09:33.771 10:14:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:33.771 10:14:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72032 00:09:33.771 10:14:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:33.771 10:14:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:33.771 10:14:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72032' 00:09:33.771 killing process with pid 72032 00:09:33.771 10:14:47 -- common/autotest_common.sh@945 -- # kill 72032 00:09:33.771 10:14:47 -- common/autotest_common.sh@950 -- # wait 72032 00:09:34.338 10:14:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:34.338 10:14:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:34.338 10:14:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:34.338 10:14:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:34.338 10:14:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:34.338 10:14:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.338 10:14:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:34.338 10:14:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.338 10:14:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 
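(Stripped of the xtrace noise, the lvol chain exercised by this test is the short rpc.py sequence below; all commands appear verbatim in the trace above, the UUID capture into shell variables is only schematic.)

# Sketch of the volume chain built and torn down in nvmf_lvol.sh:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512                      # -> Malloc0
$rpc bdev_malloc_create 64 512                      # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)      # lvstore on the raid0 bdev
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # 20 MiB logical volume
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT) # snapshot while perf runs
$rpc bdev_lvol_resize "$lvol" 30                    # grow the live lvol to 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)      # thin clone of the snapshot
$rpc bdev_lvol_inflate "$clone"                     # decouple the clone from it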
00:09:34.338 00:09:34.338 real 0m15.650s 00:09:34.338 user 1m4.240s 00:09:34.338 sys 0m4.837s 00:09:34.338 10:14:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:34.338 10:14:47 -- common/autotest_common.sh@10 -- # set +x 00:09:34.338 ************************************ 00:09:34.338 END TEST nvmf_lvol 00:09:34.338 ************************************ 00:09:34.338 10:14:47 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:34.338 10:14:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:34.338 10:14:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:34.338 10:14:47 -- common/autotest_common.sh@10 -- # set +x 00:09:34.338 ************************************ 00:09:34.338 START TEST nvmf_lvs_grow 00:09:34.338 ************************************ 00:09:34.338 10:14:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:34.338 * Looking for test storage... 00:09:34.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:34.338 10:14:47 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:34.338 10:14:47 -- nvmf/common.sh@7 -- # uname -s 00:09:34.338 10:14:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.338 10:14:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.338 10:14:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.338 10:14:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.338 10:14:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.338 10:14:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.338 10:14:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.338 10:14:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.338 10:14:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.338 10:14:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.338 10:14:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:09:34.338 10:14:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:09:34.338 10:14:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.338 10:14:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.338 10:14:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:34.338 10:14:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:34.338 10:14:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.338 10:14:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.338 10:14:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.338 10:14:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.338 10:14:47 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.339 10:14:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.339 10:14:47 -- paths/export.sh@5 -- # export PATH 00:09:34.339 10:14:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.339 10:14:47 -- nvmf/common.sh@46 -- # : 0 00:09:34.339 10:14:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:34.339 10:14:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:34.339 10:14:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:34.339 10:14:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.339 10:14:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.339 10:14:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:34.339 10:14:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:34.339 10:14:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:34.339 10:14:47 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.339 10:14:47 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:34.339 10:14:47 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:09:34.339 10:14:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:34.339 10:14:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.339 10:14:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:34.339 10:14:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:34.339 10:14:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:34.339 10:14:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.339 10:14:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:34.339 10:14:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.339 10:14:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:34.339 10:14:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:34.339 10:14:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:34.339 10:14:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:34.339 10:14:47 
-- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:34.339 10:14:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:34.339 10:14:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.339 10:14:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.339 10:14:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:34.339 10:14:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:34.339 10:14:47 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:34.339 10:14:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:34.339 10:14:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:34.339 10:14:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.339 10:14:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:34.339 10:14:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:34.339 10:14:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:34.339 10:14:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:34.339 10:14:47 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:34.339 10:14:47 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:34.597 Cannot find device "nvmf_tgt_br" 00:09:34.598 10:14:47 -- nvmf/common.sh@154 -- # true 00:09:34.598 10:14:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:34.598 Cannot find device "nvmf_tgt_br2" 00:09:34.598 10:14:47 -- nvmf/common.sh@155 -- # true 00:09:34.598 10:14:47 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:34.598 10:14:47 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:34.598 Cannot find device "nvmf_tgt_br" 00:09:34.598 10:14:47 -- nvmf/common.sh@157 -- # true 00:09:34.598 10:14:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:34.598 Cannot find device "nvmf_tgt_br2" 00:09:34.598 10:14:47 -- nvmf/common.sh@158 -- # true 00:09:34.598 10:14:47 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:34.598 10:14:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:34.598 10:14:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:34.598 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:34.598 10:14:47 -- nvmf/common.sh@161 -- # true 00:09:34.598 10:14:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:34.598 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:34.598 10:14:47 -- nvmf/common.sh@162 -- # true 00:09:34.598 10:14:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:34.598 10:14:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:34.598 10:14:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:34.598 10:14:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:34.598 10:14:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:34.598 10:14:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:34.598 10:14:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:34.598 10:14:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:34.598 10:14:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.3/24 dev nvmf_tgt_if2 00:09:34.598 10:14:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:34.598 10:14:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:34.598 10:14:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:34.598 10:14:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:34.598 10:14:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:34.598 10:14:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:34.598 10:14:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:34.598 10:14:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:34.598 10:14:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:34.598 10:14:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:34.598 10:14:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:34.598 10:14:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:34.857 10:14:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:34.858 10:14:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:34.858 10:14:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:34.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:34.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:09:34.858 00:09:34.858 --- 10.0.0.2 ping statistics --- 00:09:34.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.858 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:09:34.858 10:14:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:34.858 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:34.858 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:09:34.858 00:09:34.858 --- 10.0.0.3 ping statistics --- 00:09:34.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.858 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:34.858 10:14:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:34.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:34.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:09:34.858 00:09:34.858 --- 10.0.0.1 ping statistics --- 00:09:34.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.858 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:09:34.858 10:14:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.858 10:14:48 -- nvmf/common.sh@421 -- # return 0 00:09:34.858 10:14:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:34.858 10:14:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.858 10:14:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:34.858 10:14:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:34.858 10:14:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.858 10:14:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:34.858 10:14:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:34.858 10:14:48 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:09:34.858 10:14:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:34.858 10:14:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:34.858 10:14:48 -- common/autotest_common.sh@10 -- # set +x 00:09:34.858 10:14:48 -- nvmf/common.sh@469 -- # nvmfpid=72432 00:09:34.858 10:14:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:34.858 10:14:48 -- nvmf/common.sh@470 -- # waitforlisten 72432 00:09:34.858 10:14:48 -- common/autotest_common.sh@819 -- # '[' -z 72432 ']' 00:09:34.858 10:14:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.858 10:14:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:34.858 10:14:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.858 10:14:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:34.858 10:14:48 -- common/autotest_common.sh@10 -- # set +x 00:09:34.858 [2024-07-26 10:14:48.161611] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:09:34.858 [2024-07-26 10:14:48.161711] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.858 [2024-07-26 10:14:48.298204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.117 [2024-07-26 10:14:48.429635] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:35.117 [2024-07-26 10:14:48.429834] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.117 [2024-07-26 10:14:48.429852] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.117 [2024-07-26 10:14:48.429864] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:35.117 [2024-07-26 10:14:48.429900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.055 10:14:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:36.055 10:14:49 -- common/autotest_common.sh@852 -- # return 0 00:09:36.055 10:14:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:36.055 10:14:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:36.055 10:14:49 -- common/autotest_common.sh@10 -- # set +x 00:09:36.055 10:14:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.055 10:14:49 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:36.055 [2024-07-26 10:14:49.433401] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.055 10:14:49 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:09:36.055 10:14:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:36.055 10:14:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:36.055 10:14:49 -- common/autotest_common.sh@10 -- # set +x 00:09:36.055 ************************************ 00:09:36.055 START TEST lvs_grow_clean 00:09:36.055 ************************************ 00:09:36.055 10:14:49 -- common/autotest_common.sh@1104 -- # lvs_grow 00:09:36.055 10:14:49 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:36.055 10:14:49 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:36.055 10:14:49 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:36.055 10:14:49 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:36.055 10:14:49 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:36.055 10:14:49 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:36.055 10:14:49 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:36.055 10:14:49 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:36.055 10:14:49 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:36.313 10:14:49 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:36.313 10:14:49 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:36.880 10:14:50 -- target/nvmf_lvs_grow.sh@28 -- # lvs=51494de3-02ed-40df-bbfa-d5807b76039d 00:09:36.880 10:14:50 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51494de3-02ed-40df-bbfa-d5807b76039d 00:09:36.880 10:14:50 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:36.880 10:14:50 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:36.880 10:14:50 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:36.880 10:14:50 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 51494de3-02ed-40df-bbfa-d5807b76039d lvol 150 00:09:37.139 10:14:50 -- target/nvmf_lvs_grow.sh@33 -- # lvol=7b8adf50-734f-41b5-b1a9-f9243ee734f7 00:09:37.139 10:14:50 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:37.139 10:14:50 -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:37.397 [2024-07-26 10:14:50.700561] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:37.397 [2024-07-26 10:14:50.700714] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:37.397 true 00:09:37.397 10:14:50 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51494de3-02ed-40df-bbfa-d5807b76039d 00:09:37.397 10:14:50 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:37.656 10:14:50 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:37.656 10:14:50 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:37.656 10:14:51 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7b8adf50-734f-41b5-b1a9-f9243ee734f7 00:09:37.914 10:14:51 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:38.173 [2024-07-26 10:14:51.561222] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:38.173 10:14:51 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:38.431 10:14:51 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72515 00:09:38.431 10:14:51 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:38.431 10:14:51 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:38.431 10:14:51 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72515 /var/tmp/bdevperf.sock 00:09:38.431 10:14:51 -- common/autotest_common.sh@819 -- # '[' -z 72515 ']' 00:09:38.431 10:14:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:38.431 10:14:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:38.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:38.431 10:14:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:38.431 10:14:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:38.431 10:14:51 -- common/autotest_common.sh@10 -- # set +x 00:09:38.431 [2024-07-26 10:14:51.832608] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:09:38.431 [2024-07-26 10:14:51.832739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72515 ] 00:09:38.689 [2024-07-26 10:14:51.974180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.689 [2024-07-26 10:14:52.057085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.621 10:14:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:39.621 10:14:52 -- common/autotest_common.sh@852 -- # return 0 00:09:39.621 10:14:52 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:39.621 Nvme0n1 00:09:39.621 10:14:53 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:39.879 [ 00:09:39.879 { 00:09:39.879 "name": "Nvme0n1", 00:09:39.879 "aliases": [ 00:09:39.879 "7b8adf50-734f-41b5-b1a9-f9243ee734f7" 00:09:39.879 ], 00:09:39.879 "product_name": "NVMe disk", 00:09:39.879 "block_size": 4096, 00:09:39.879 "num_blocks": 38912, 00:09:39.879 "uuid": "7b8adf50-734f-41b5-b1a9-f9243ee734f7", 00:09:39.879 "assigned_rate_limits": { 00:09:39.879 "rw_ios_per_sec": 0, 00:09:39.879 "rw_mbytes_per_sec": 0, 00:09:39.879 "r_mbytes_per_sec": 0, 00:09:39.879 "w_mbytes_per_sec": 0 00:09:39.879 }, 00:09:39.879 "claimed": false, 00:09:39.879 "zoned": false, 00:09:39.879 "supported_io_types": { 00:09:39.879 "read": true, 00:09:39.879 "write": true, 00:09:39.879 "unmap": true, 00:09:39.879 "write_zeroes": true, 00:09:39.879 "flush": true, 00:09:39.879 "reset": true, 00:09:39.879 "compare": true, 00:09:39.879 "compare_and_write": true, 00:09:39.879 "abort": true, 00:09:39.879 "nvme_admin": true, 00:09:39.879 "nvme_io": true 00:09:39.879 }, 00:09:39.879 "driver_specific": { 00:09:39.879 "nvme": [ 00:09:39.879 { 00:09:39.879 "trid": { 00:09:39.879 "trtype": "TCP", 00:09:39.879 "adrfam": "IPv4", 00:09:39.879 "traddr": "10.0.0.2", 00:09:39.879 "trsvcid": "4420", 00:09:39.879 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:39.879 }, 00:09:39.879 "ctrlr_data": { 00:09:39.879 "cntlid": 1, 00:09:39.879 "vendor_id": "0x8086", 00:09:39.879 "model_number": "SPDK bdev Controller", 00:09:39.879 "serial_number": "SPDK0", 00:09:39.879 "firmware_revision": "24.01.1", 00:09:39.879 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:39.879 "oacs": { 00:09:39.879 "security": 0, 00:09:39.879 "format": 0, 00:09:39.879 "firmware": 0, 00:09:39.879 "ns_manage": 0 00:09:39.879 }, 00:09:39.879 "multi_ctrlr": true, 00:09:39.879 "ana_reporting": false 00:09:39.879 }, 00:09:39.879 "vs": { 00:09:39.879 "nvme_version": "1.3" 00:09:39.879 }, 00:09:39.879 "ns_data": { 00:09:39.879 "id": 1, 00:09:39.879 "can_share": true 00:09:39.879 } 00:09:39.879 } 00:09:39.879 ], 00:09:39.879 "mp_policy": "active_passive" 00:09:39.879 } 00:09:39.879 } 00:09:39.879 ] 00:09:39.879 10:14:53 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:39.879 10:14:53 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72533 00:09:39.879 10:14:53 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:39.879 Running I/O for 10 seconds... 
00:09:41.253 Latency(us) 00:09:41.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.253 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.253 Nvme0n1 : 1.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:41.253 =================================================================================================================== 00:09:41.253 Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:41.253 00:09:41.820 10:14:55 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 51494de3-02ed-40df-bbfa-d5807b76039d 00:09:42.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.078 Nvme0n1 : 2.00 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:09:42.078 =================================================================================================================== 00:09:42.078 Total : 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:09:42.078 00:09:42.078 true 00:09:42.078 10:14:55 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51494de3-02ed-40df-bbfa-d5807b76039d 00:09:42.078 10:14:55 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:42.646 10:14:55 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:42.646 10:14:55 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:42.646 10:14:55 -- target/nvmf_lvs_grow.sh@65 -- # wait 72533 00:09:42.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.905 Nvme0n1 : 3.00 6392.33 24.97 0.00 0.00 0.00 0.00 0.00 00:09:42.905 =================================================================================================================== 00:09:42.905 Total : 6392.33 24.97 0.00 0.00 0.00 0.00 0.00 00:09:42.905 00:09:44.281 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.281 Nvme0n1 : 4.00 6381.75 24.93 0.00 0.00 0.00 0.00 0.00 00:09:44.281 =================================================================================================================== 00:09:44.281 Total : 6381.75 24.93 0.00 0.00 0.00 0.00 0.00 00:09:44.281 00:09:45.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.217 Nvme0n1 : 5.00 6426.20 25.10 0.00 0.00 0.00 0.00 0.00 00:09:45.217 =================================================================================================================== 00:09:45.217 Total : 6426.20 25.10 0.00 0.00 0.00 0.00 0.00 00:09:45.217 00:09:46.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.153 Nvme0n1 : 6.00 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:09:46.153 =================================================================================================================== 00:09:46.153 Total : 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:09:46.153 00:09:47.089 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.089 Nvme0n1 : 7.00 6458.86 25.23 0.00 0.00 0.00 0.00 0.00 00:09:47.089 =================================================================================================================== 00:09:47.089 Total : 6458.86 25.23 0.00 0.00 0.00 0.00 0.00 00:09:47.089 00:09:48.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.023 Nvme0n1 : 8.00 6492.88 25.36 0.00 0.00 0.00 0.00 0.00 00:09:48.023 
=================================================================================================================== 00:09:48.023 Total : 6492.88 25.36 0.00 0.00 0.00 0.00 0.00 00:09:48.023 00:09:48.959 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.959 Nvme0n1 : 9.00 6505.22 25.41 0.00 0.00 0.00 0.00 0.00 00:09:48.959 =================================================================================================================== 00:09:48.959 Total : 6505.22 25.41 0.00 0.00 0.00 0.00 0.00 00:09:48.959 00:09:49.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.912 Nvme0n1 : 10.00 6502.40 25.40 0.00 0.00 0.00 0.00 0.00 00:09:49.912 =================================================================================================================== 00:09:49.912 Total : 6502.40 25.40 0.00 0.00 0.00 0.00 0.00 00:09:49.912 00:09:49.912 00:09:49.912 Latency(us) 00:09:49.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:49.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.912 Nvme0n1 : 10.01 6508.70 25.42 0.00 0.00 19660.25 16801.05 45756.04 00:09:49.912 =================================================================================================================== 00:09:49.912 Total : 6508.70 25.42 0.00 0.00 19660.25 16801.05 45756.04 00:09:49.912 0 00:09:49.912 10:15:03 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72515 00:09:49.912 10:15:03 -- common/autotest_common.sh@926 -- # '[' -z 72515 ']' 00:09:49.912 10:15:03 -- common/autotest_common.sh@930 -- # kill -0 72515 00:09:49.912 10:15:03 -- common/autotest_common.sh@931 -- # uname 00:09:49.912 10:15:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:49.912 10:15:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72515 00:09:50.171 10:15:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:09:50.171 10:15:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:09:50.171 killing process with pid 72515 00:09:50.171 10:15:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72515' 00:09:50.171 Received shutdown signal, test time was about 10.000000 seconds 00:09:50.171 00:09:50.171 Latency(us) 00:09:50.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:50.171 =================================================================================================================== 00:09:50.171 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:50.171 10:15:03 -- common/autotest_common.sh@945 -- # kill 72515 00:09:50.171 10:15:03 -- common/autotest_common.sh@950 -- # wait 72515 00:09:50.171 10:15:03 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:50.429 10:15:03 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51494de3-02ed-40df-bbfa-d5807b76039d 00:09:50.429 10:15:03 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:09:50.687 10:15:04 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:09:50.687 10:15:04 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:09:50.687 10:15:04 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:50.945 [2024-07-26 10:15:04.368353] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:51.203 
10:15:04 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51494de3-02ed-40df-bbfa-d5807b76039d 00:09:51.203 10:15:04 -- common/autotest_common.sh@640 -- # local es=0 00:09:51.203 10:15:04 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51494de3-02ed-40df-bbfa-d5807b76039d 00:09:51.203 10:15:04 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:51.203 10:15:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:51.203 10:15:04 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:51.203 10:15:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:51.203 10:15:04 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:51.203 10:15:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:51.203 10:15:04 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:51.203 10:15:04 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:51.203 10:15:04 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51494de3-02ed-40df-bbfa-d5807b76039d 00:09:51.203 request: 00:09:51.203 { 00:09:51.203 "uuid": "51494de3-02ed-40df-bbfa-d5807b76039d", 00:09:51.203 "method": "bdev_lvol_get_lvstores", 00:09:51.203 "req_id": 1 00:09:51.203 } 00:09:51.203 Got JSON-RPC error response 00:09:51.203 response: 00:09:51.203 { 00:09:51.203 "code": -19, 00:09:51.203 "message": "No such device" 00:09:51.203 } 00:09:51.203 10:15:04 -- common/autotest_common.sh@643 -- # es=1 00:09:51.203 10:15:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:51.203 10:15:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:51.203 10:15:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:51.203 10:15:04 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:51.461 aio_bdev 00:09:51.461 10:15:04 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 7b8adf50-734f-41b5-b1a9-f9243ee734f7 00:09:51.461 10:15:04 -- common/autotest_common.sh@887 -- # local bdev_name=7b8adf50-734f-41b5-b1a9-f9243ee734f7 00:09:51.461 10:15:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:51.461 10:15:04 -- common/autotest_common.sh@889 -- # local i 00:09:51.461 10:15:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:51.461 10:15:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:51.461 10:15:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:51.719 10:15:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7b8adf50-734f-41b5-b1a9-f9243ee734f7 -t 2000 00:09:51.981 [ 00:09:51.981 { 00:09:51.981 "name": "7b8adf50-734f-41b5-b1a9-f9243ee734f7", 00:09:51.981 "aliases": [ 00:09:51.981 "lvs/lvol" 00:09:51.981 ], 00:09:51.981 "product_name": "Logical Volume", 00:09:51.981 "block_size": 4096, 00:09:51.981 "num_blocks": 38912, 00:09:51.981 "uuid": "7b8adf50-734f-41b5-b1a9-f9243ee734f7", 00:09:51.981 "assigned_rate_limits": { 00:09:51.981 "rw_ios_per_sec": 0, 00:09:51.981 "rw_mbytes_per_sec": 0, 00:09:51.981 "r_mbytes_per_sec": 0, 00:09:51.981 
"w_mbytes_per_sec": 0 00:09:51.981 }, 00:09:51.981 "claimed": false, 00:09:51.981 "zoned": false, 00:09:51.981 "supported_io_types": { 00:09:51.981 "read": true, 00:09:51.981 "write": true, 00:09:51.981 "unmap": true, 00:09:51.981 "write_zeroes": true, 00:09:51.981 "flush": false, 00:09:51.981 "reset": true, 00:09:51.981 "compare": false, 00:09:51.981 "compare_and_write": false, 00:09:51.981 "abort": false, 00:09:51.981 "nvme_admin": false, 00:09:51.981 "nvme_io": false 00:09:51.981 }, 00:09:51.981 "driver_specific": { 00:09:51.981 "lvol": { 00:09:51.981 "lvol_store_uuid": "51494de3-02ed-40df-bbfa-d5807b76039d", 00:09:51.981 "base_bdev": "aio_bdev", 00:09:51.981 "thin_provision": false, 00:09:51.981 "snapshot": false, 00:09:51.981 "clone": false, 00:09:51.981 "esnap_clone": false 00:09:51.981 } 00:09:51.981 } 00:09:51.981 } 00:09:51.981 ] 00:09:51.981 10:15:05 -- common/autotest_common.sh@895 -- # return 0 00:09:51.981 10:15:05 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51494de3-02ed-40df-bbfa-d5807b76039d 00:09:51.981 10:15:05 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:09:52.241 10:15:05 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:09:52.241 10:15:05 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:09:52.241 10:15:05 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51494de3-02ed-40df-bbfa-d5807b76039d 00:09:52.499 10:15:05 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:09:52.499 10:15:05 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7b8adf50-734f-41b5-b1a9-f9243ee734f7 00:09:52.758 10:15:06 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 51494de3-02ed-40df-bbfa-d5807b76039d 00:09:53.016 10:15:06 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:53.275 10:15:06 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:53.534 ************************************ 00:09:53.534 END TEST lvs_grow_clean 00:09:53.534 ************************************ 00:09:53.534 00:09:53.534 real 0m17.518s 00:09:53.534 user 0m16.374s 00:09:53.534 sys 0m2.438s 00:09:53.534 10:15:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:53.534 10:15:06 -- common/autotest_common.sh@10 -- # set +x 00:09:53.793 10:15:07 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:53.793 10:15:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:53.793 10:15:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:53.793 10:15:07 -- common/autotest_common.sh@10 -- # set +x 00:09:53.793 ************************************ 00:09:53.793 START TEST lvs_grow_dirty 00:09:53.793 ************************************ 00:09:53.793 10:15:07 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:09:53.793 10:15:07 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:53.793 10:15:07 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:53.793 10:15:07 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:53.793 10:15:07 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:53.793 10:15:07 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:53.793 10:15:07 -- target/nvmf_lvs_grow.sh@20 -- # local 
lvol_bdev_size_mb=150 00:09:53.793 10:15:07 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:53.793 10:15:07 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:53.793 10:15:07 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:54.051 10:15:07 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:54.051 10:15:07 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:54.309 10:15:07 -- target/nvmf_lvs_grow.sh@28 -- # lvs=2f713cfd-dd0c-449d-a865-d4743fc6063f 00:09:54.309 10:15:07 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f713cfd-dd0c-449d-a865-d4743fc6063f 00:09:54.309 10:15:07 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:54.568 10:15:07 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:54.568 10:15:07 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:54.568 10:15:07 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2f713cfd-dd0c-449d-a865-d4743fc6063f lvol 150 00:09:54.827 10:15:08 -- target/nvmf_lvs_grow.sh@33 -- # lvol=ea3aea3b-c2e0-4724-a12e-093fd9cb07f2 00:09:54.827 10:15:08 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:54.827 10:15:08 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:54.827 [2024-07-26 10:15:08.249627] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:54.827 [2024-07-26 10:15:08.249767] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:54.827 true 00:09:54.827 10:15:08 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f713cfd-dd0c-449d-a865-d4743fc6063f 00:09:54.827 10:15:08 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:55.086 10:15:08 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:55.086 10:15:08 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:55.345 10:15:08 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ea3aea3b-c2e0-4724-a12e-093fd9cb07f2 00:09:55.639 10:15:09 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:55.897 10:15:09 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:56.157 10:15:09 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72777 00:09:56.157 10:15:09 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:56.157 10:15:09 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
00:09:56.157 10:15:09 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72777 /var/tmp/bdevperf.sock 00:09:56.157 10:15:09 -- common/autotest_common.sh@819 -- # '[' -z 72777 ']' 00:09:56.157 10:15:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:56.157 10:15:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:56.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:56.157 10:15:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:56.157 10:15:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:56.157 10:15:09 -- common/autotest_common.sh@10 -- # set +x 00:09:56.157 [2024-07-26 10:15:09.574046] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:09:56.157 [2024-07-26 10:15:09.574143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72777 ] 00:09:56.416 [2024-07-26 10:15:09.712935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.416 [2024-07-26 10:15:09.820852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.353 10:15:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:57.353 10:15:10 -- common/autotest_common.sh@852 -- # return 0 00:09:57.353 10:15:10 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:57.353 Nvme0n1 00:09:57.353 10:15:10 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:57.612 [ 00:09:57.612 { 00:09:57.612 "name": "Nvme0n1", 00:09:57.612 "aliases": [ 00:09:57.612 "ea3aea3b-c2e0-4724-a12e-093fd9cb07f2" 00:09:57.612 ], 00:09:57.612 "product_name": "NVMe disk", 00:09:57.612 "block_size": 4096, 00:09:57.612 "num_blocks": 38912, 00:09:57.612 "uuid": "ea3aea3b-c2e0-4724-a12e-093fd9cb07f2", 00:09:57.612 "assigned_rate_limits": { 00:09:57.612 "rw_ios_per_sec": 0, 00:09:57.612 "rw_mbytes_per_sec": 0, 00:09:57.612 "r_mbytes_per_sec": 0, 00:09:57.612 "w_mbytes_per_sec": 0 00:09:57.612 }, 00:09:57.612 "claimed": false, 00:09:57.612 "zoned": false, 00:09:57.612 "supported_io_types": { 00:09:57.612 "read": true, 00:09:57.612 "write": true, 00:09:57.612 "unmap": true, 00:09:57.612 "write_zeroes": true, 00:09:57.612 "flush": true, 00:09:57.612 "reset": true, 00:09:57.612 "compare": true, 00:09:57.612 "compare_and_write": true, 00:09:57.612 "abort": true, 00:09:57.612 "nvme_admin": true, 00:09:57.612 "nvme_io": true 00:09:57.612 }, 00:09:57.612 "driver_specific": { 00:09:57.612 "nvme": [ 00:09:57.612 { 00:09:57.613 "trid": { 00:09:57.613 "trtype": "TCP", 00:09:57.613 "adrfam": "IPv4", 00:09:57.613 "traddr": "10.0.0.2", 00:09:57.613 "trsvcid": "4420", 00:09:57.613 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:57.613 }, 00:09:57.613 "ctrlr_data": { 00:09:57.613 "cntlid": 1, 00:09:57.613 "vendor_id": "0x8086", 00:09:57.613 "model_number": "SPDK bdev Controller", 00:09:57.613 "serial_number": "SPDK0", 00:09:57.613 "firmware_revision": "24.01.1", 00:09:57.613 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:57.613 "oacs": { 00:09:57.613 "security": 0, 00:09:57.613 "format": 0, 
00:09:57.613 "firmware": 0, 00:09:57.613 "ns_manage": 0 00:09:57.613 }, 00:09:57.613 "multi_ctrlr": true, 00:09:57.613 "ana_reporting": false 00:09:57.613 }, 00:09:57.613 "vs": { 00:09:57.613 "nvme_version": "1.3" 00:09:57.613 }, 00:09:57.613 "ns_data": { 00:09:57.613 "id": 1, 00:09:57.613 "can_share": true 00:09:57.613 } 00:09:57.613 } 00:09:57.613 ], 00:09:57.613 "mp_policy": "active_passive" 00:09:57.613 } 00:09:57.613 } 00:09:57.613 ] 00:09:57.613 10:15:10 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:57.613 10:15:10 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72801 00:09:57.613 10:15:10 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:57.872 Running I/O for 10 seconds... 00:09:58.810 Latency(us) 00:09:58.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.810 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:58.810 Nvme0n1 : 1.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:09:58.810 =================================================================================================================== 00:09:58.810 Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:09:58.810 00:09:59.746 10:15:12 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2f713cfd-dd0c-449d-a865-d4743fc6063f 00:09:59.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:59.746 Nvme0n1 : 2.00 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:09:59.746 =================================================================================================================== 00:09:59.746 Total : 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:09:59.746 00:10:00.027 true 00:10:00.027 10:15:13 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f713cfd-dd0c-449d-a865-d4743fc6063f 00:10:00.027 10:15:13 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:00.291 10:15:13 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:00.291 10:15:13 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:00.291 10:15:13 -- target/nvmf_lvs_grow.sh@65 -- # wait 72801 00:10:00.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:00.858 Nvme0n1 : 3.00 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:10:00.858 =================================================================================================================== 00:10:00.858 Total : 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:10:00.858 00:10:01.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:01.795 Nvme0n1 : 4.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:10:01.795 =================================================================================================================== 00:10:01.795 Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:10:01.795 00:10:02.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:02.732 Nvme0n1 : 5.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:10:02.732 =================================================================================================================== 00:10:02.732 Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:10:02.732 00:10:03.669 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:03.669 Nvme0n1 : 6.00 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:10:03.669 
=================================================================================================================== 00:10:03.669 Total : 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:10:03.669 00:10:05.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:05.047 Nvme0n1 : 7.00 6404.43 25.02 0.00 0.00 0.00 0.00 0.00 00:10:05.047 =================================================================================================================== 00:10:05.047 Total : 6404.43 25.02 0.00 0.00 0.00 0.00 0.00 00:10:05.047 00:10:05.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:05.983 Nvme0n1 : 8.00 6381.75 24.93 0.00 0.00 0.00 0.00 0.00 00:10:05.983 =================================================================================================================== 00:10:05.983 Total : 6381.75 24.93 0.00 0.00 0.00 0.00 0.00 00:10:05.983 00:10:06.917 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:06.917 Nvme0n1 : 9.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:10:06.917 =================================================================================================================== 00:10:06.917 Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:10:06.917 00:10:07.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:07.861 Nvme0n1 : 10.00 6324.60 24.71 0.00 0.00 0.00 0.00 0.00 00:10:07.861 =================================================================================================================== 00:10:07.861 Total : 6324.60 24.71 0.00 0.00 0.00 0.00 0.00 00:10:07.861 00:10:07.861 00:10:07.861 Latency(us) 00:10:07.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:07.861 Nvme0n1 : 10.02 6322.22 24.70 0.00 0.00 20240.22 17158.52 59339.87 00:10:07.861 =================================================================================================================== 00:10:07.861 Total : 6322.22 24.70 0.00 0.00 20240.22 17158.52 59339.87 00:10:07.861 0 00:10:07.861 10:15:21 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72777 00:10:07.861 10:15:21 -- common/autotest_common.sh@926 -- # '[' -z 72777 ']' 00:10:07.861 10:15:21 -- common/autotest_common.sh@930 -- # kill -0 72777 00:10:07.861 10:15:21 -- common/autotest_common.sh@931 -- # uname 00:10:07.861 10:15:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:07.861 10:15:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72777 00:10:07.861 killing process with pid 72777 00:10:07.861 Received shutdown signal, test time was about 10.000000 seconds 00:10:07.861 00:10:07.861 Latency(us) 00:10:07.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.861 =================================================================================================================== 00:10:07.861 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:07.861 10:15:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:10:07.861 10:15:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:10:07.861 10:15:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72777' 00:10:07.861 10:15:21 -- common/autotest_common.sh@945 -- # kill 72777 00:10:07.861 10:15:21 -- common/autotest_common.sh@950 -- # wait 72777 00:10:08.144 10:15:21 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:08.401 10:15:21 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f713cfd-dd0c-449d-a865-d4743fc6063f 00:10:08.401 10:15:21 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:10:08.659 10:15:21 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:10:08.659 10:15:21 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:10:08.659 10:15:21 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 72432 00:10:08.659 10:15:21 -- target/nvmf_lvs_grow.sh@74 -- # wait 72432 00:10:08.659 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 72432 Killed "${NVMF_APP[@]}" "$@" 00:10:08.659 10:15:21 -- target/nvmf_lvs_grow.sh@74 -- # true 00:10:08.659 10:15:21 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:10:08.659 10:15:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:08.659 10:15:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:08.659 10:15:21 -- common/autotest_common.sh@10 -- # set +x 00:10:08.659 10:15:21 -- nvmf/common.sh@469 -- # nvmfpid=72927 00:10:08.659 10:15:21 -- nvmf/common.sh@470 -- # waitforlisten 72927 00:10:08.659 10:15:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:08.659 10:15:21 -- common/autotest_common.sh@819 -- # '[' -z 72927 ']' 00:10:08.659 10:15:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.660 10:15:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:08.660 10:15:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.660 10:15:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:08.660 10:15:21 -- common/autotest_common.sh@10 -- # set +x 00:10:08.660 [2024-07-26 10:15:22.040227] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:08.660 [2024-07-26 10:15:22.040323] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.917 [2024-07-26 10:15:22.182421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.917 [2024-07-26 10:15:22.273879] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:08.917 [2024-07-26 10:15:22.274050] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.917 [2024-07-26 10:15:22.274089] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:08.917 [2024-07-26 10:15:22.274099] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:08.917 [2024-07-26 10:15:22.274130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.850 10:15:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:09.850 10:15:22 -- common/autotest_common.sh@852 -- # return 0 00:10:09.850 10:15:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:09.850 10:15:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:09.850 10:15:22 -- common/autotest_common.sh@10 -- # set +x 00:10:09.850 10:15:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.850 10:15:23 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:09.850 [2024-07-26 10:15:23.277778] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:09.850 [2024-07-26 10:15:23.278190] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:09.850 [2024-07-26 10:15:23.278347] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:10.108 10:15:23 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:10:10.108 10:15:23 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev ea3aea3b-c2e0-4724-a12e-093fd9cb07f2 00:10:10.108 10:15:23 -- common/autotest_common.sh@887 -- # local bdev_name=ea3aea3b-c2e0-4724-a12e-093fd9cb07f2 00:10:10.108 10:15:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:10.108 10:15:23 -- common/autotest_common.sh@889 -- # local i 00:10:10.108 10:15:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:10.108 10:15:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:10.108 10:15:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:10.365 10:15:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ea3aea3b-c2e0-4724-a12e-093fd9cb07f2 -t 2000 00:10:10.623 [ 00:10:10.623 { 00:10:10.623 "name": "ea3aea3b-c2e0-4724-a12e-093fd9cb07f2", 00:10:10.623 "aliases": [ 00:10:10.623 "lvs/lvol" 00:10:10.623 ], 00:10:10.623 "product_name": "Logical Volume", 00:10:10.623 "block_size": 4096, 00:10:10.623 "num_blocks": 38912, 00:10:10.623 "uuid": "ea3aea3b-c2e0-4724-a12e-093fd9cb07f2", 00:10:10.623 "assigned_rate_limits": { 00:10:10.623 "rw_ios_per_sec": 0, 00:10:10.623 "rw_mbytes_per_sec": 0, 00:10:10.623 "r_mbytes_per_sec": 0, 00:10:10.623 "w_mbytes_per_sec": 0 00:10:10.623 }, 00:10:10.623 "claimed": false, 00:10:10.623 "zoned": false, 00:10:10.623 "supported_io_types": { 00:10:10.623 "read": true, 00:10:10.623 "write": true, 00:10:10.623 "unmap": true, 00:10:10.623 "write_zeroes": true, 00:10:10.623 "flush": false, 00:10:10.623 "reset": true, 00:10:10.623 "compare": false, 00:10:10.623 "compare_and_write": false, 00:10:10.623 "abort": false, 00:10:10.623 "nvme_admin": false, 00:10:10.623 "nvme_io": false 00:10:10.623 }, 00:10:10.623 "driver_specific": { 00:10:10.623 "lvol": { 00:10:10.623 "lvol_store_uuid": "2f713cfd-dd0c-449d-a865-d4743fc6063f", 00:10:10.623 "base_bdev": "aio_bdev", 00:10:10.623 "thin_provision": false, 00:10:10.623 "snapshot": false, 00:10:10.623 "clone": false, 00:10:10.623 "esnap_clone": false 00:10:10.623 } 00:10:10.623 } 00:10:10.623 } 00:10:10.623 ] 00:10:10.623 10:15:23 -- common/autotest_common.sh@895 -- # return 0 00:10:10.623 10:15:23 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
2f713cfd-dd0c-449d-a865-d4743fc6063f 00:10:10.623 10:15:23 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:10:10.882 10:15:24 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:10:10.882 10:15:24 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f713cfd-dd0c-449d-a865-d4743fc6063f 00:10:10.882 10:15:24 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:10:11.141 10:15:24 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:10:11.141 10:15:24 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:11.141 [2024-07-26 10:15:24.559108] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:11.398 10:15:24 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f713cfd-dd0c-449d-a865-d4743fc6063f 00:10:11.398 10:15:24 -- common/autotest_common.sh@640 -- # local es=0 00:10:11.398 10:15:24 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f713cfd-dd0c-449d-a865-d4743fc6063f 00:10:11.398 10:15:24 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:11.398 10:15:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:11.398 10:15:24 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:11.398 10:15:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:11.398 10:15:24 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:11.398 10:15:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:11.399 10:15:24 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:11.399 10:15:24 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:11.399 10:15:24 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f713cfd-dd0c-449d-a865-d4743fc6063f 00:10:11.399 request: 00:10:11.399 { 00:10:11.399 "uuid": "2f713cfd-dd0c-449d-a865-d4743fc6063f", 00:10:11.399 "method": "bdev_lvol_get_lvstores", 00:10:11.399 "req_id": 1 00:10:11.399 } 00:10:11.399 Got JSON-RPC error response 00:10:11.399 response: 00:10:11.399 { 00:10:11.399 "code": -19, 00:10:11.399 "message": "No such device" 00:10:11.399 } 00:10:11.399 10:15:24 -- common/autotest_common.sh@643 -- # es=1 00:10:11.399 10:15:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:11.399 10:15:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:11.399 10:15:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:11.399 10:15:24 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:11.966 aio_bdev 00:10:11.966 10:15:25 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev ea3aea3b-c2e0-4724-a12e-093fd9cb07f2 00:10:11.966 10:15:25 -- common/autotest_common.sh@887 -- # local bdev_name=ea3aea3b-c2e0-4724-a12e-093fd9cb07f2 00:10:11.966 10:15:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:11.966 10:15:25 -- common/autotest_common.sh@889 -- # local i 00:10:11.966 10:15:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:11.966 10:15:25 -- 
common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:11.966 10:15:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:11.966 10:15:25 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ea3aea3b-c2e0-4724-a12e-093fd9cb07f2 -t 2000 00:10:12.225 [ 00:10:12.225 { 00:10:12.225 "name": "ea3aea3b-c2e0-4724-a12e-093fd9cb07f2", 00:10:12.225 "aliases": [ 00:10:12.225 "lvs/lvol" 00:10:12.225 ], 00:10:12.225 "product_name": "Logical Volume", 00:10:12.225 "block_size": 4096, 00:10:12.225 "num_blocks": 38912, 00:10:12.225 "uuid": "ea3aea3b-c2e0-4724-a12e-093fd9cb07f2", 00:10:12.225 "assigned_rate_limits": { 00:10:12.225 "rw_ios_per_sec": 0, 00:10:12.225 "rw_mbytes_per_sec": 0, 00:10:12.225 "r_mbytes_per_sec": 0, 00:10:12.225 "w_mbytes_per_sec": 0 00:10:12.225 }, 00:10:12.225 "claimed": false, 00:10:12.225 "zoned": false, 00:10:12.225 "supported_io_types": { 00:10:12.225 "read": true, 00:10:12.225 "write": true, 00:10:12.225 "unmap": true, 00:10:12.225 "write_zeroes": true, 00:10:12.225 "flush": false, 00:10:12.225 "reset": true, 00:10:12.225 "compare": false, 00:10:12.225 "compare_and_write": false, 00:10:12.225 "abort": false, 00:10:12.225 "nvme_admin": false, 00:10:12.225 "nvme_io": false 00:10:12.225 }, 00:10:12.225 "driver_specific": { 00:10:12.225 "lvol": { 00:10:12.225 "lvol_store_uuid": "2f713cfd-dd0c-449d-a865-d4743fc6063f", 00:10:12.225 "base_bdev": "aio_bdev", 00:10:12.225 "thin_provision": false, 00:10:12.225 "snapshot": false, 00:10:12.225 "clone": false, 00:10:12.225 "esnap_clone": false 00:10:12.225 } 00:10:12.225 } 00:10:12.225 } 00:10:12.225 ] 00:10:12.225 10:15:25 -- common/autotest_common.sh@895 -- # return 0 00:10:12.225 10:15:25 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f713cfd-dd0c-449d-a865-d4743fc6063f 00:10:12.225 10:15:25 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:10:12.483 10:15:25 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:10:12.483 10:15:25 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f713cfd-dd0c-449d-a865-d4743fc6063f 00:10:12.483 10:15:25 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:10:12.742 10:15:26 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:10:12.742 10:15:26 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ea3aea3b-c2e0-4724-a12e-093fd9cb07f2 00:10:13.001 10:15:26 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2f713cfd-dd0c-449d-a865-d4743fc6063f 00:10:13.259 10:15:26 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:13.517 10:15:26 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:13.776 00:10:13.776 real 0m19.952s 00:10:13.776 user 0m40.180s 00:10:13.776 sys 0m9.231s 00:10:13.776 10:15:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:13.776 10:15:26 -- common/autotest_common.sh@10 -- # set +x 00:10:13.776 ************************************ 00:10:13.776 END TEST lvs_grow_dirty 00:10:13.776 ************************************ 00:10:13.776 10:15:27 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:13.776 10:15:27 -- common/autotest_common.sh@796 -- # type=--id 00:10:13.776 10:15:27 -- 
common/autotest_common.sh@797 -- # id=0 00:10:13.776 10:15:27 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:10:13.776 10:15:27 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:13.776 10:15:27 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:10:13.776 10:15:27 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:10:13.776 10:15:27 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:10:13.776 10:15:27 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:13.776 nvmf_trace.0 00:10:13.776 10:15:27 -- common/autotest_common.sh@811 -- # return 0 00:10:13.776 10:15:27 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:13.776 10:15:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:13.776 10:15:27 -- nvmf/common.sh@116 -- # sync 00:10:13.776 10:15:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:13.776 10:15:27 -- nvmf/common.sh@119 -- # set +e 00:10:13.776 10:15:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:13.776 10:15:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:13.776 rmmod nvme_tcp 00:10:13.776 rmmod nvme_fabrics 00:10:14.043 rmmod nvme_keyring 00:10:14.043 10:15:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:14.043 10:15:27 -- nvmf/common.sh@123 -- # set -e 00:10:14.043 10:15:27 -- nvmf/common.sh@124 -- # return 0 00:10:14.043 10:15:27 -- nvmf/common.sh@477 -- # '[' -n 72927 ']' 00:10:14.043 10:15:27 -- nvmf/common.sh@478 -- # killprocess 72927 00:10:14.043 10:15:27 -- common/autotest_common.sh@926 -- # '[' -z 72927 ']' 00:10:14.043 10:15:27 -- common/autotest_common.sh@930 -- # kill -0 72927 00:10:14.043 10:15:27 -- common/autotest_common.sh@931 -- # uname 00:10:14.043 10:15:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:14.043 10:15:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72927 00:10:14.043 10:15:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:14.043 10:15:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:14.043 killing process with pid 72927 00:10:14.043 10:15:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72927' 00:10:14.043 10:15:27 -- common/autotest_common.sh@945 -- # kill 72927 00:10:14.043 10:15:27 -- common/autotest_common.sh@950 -- # wait 72927 00:10:14.338 10:15:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:14.338 10:15:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:14.338 10:15:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:14.338 10:15:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:14.338 10:15:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:14.338 10:15:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.338 10:15:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:14.338 10:15:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.338 10:15:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:14.338 00:10:14.338 real 0m39.946s 00:10:14.338 user 1m2.643s 00:10:14.338 sys 0m12.389s 00:10:14.338 10:15:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:14.338 10:15:27 -- common/autotest_common.sh@10 -- # set +x 00:10:14.338 ************************************ 00:10:14.338 END TEST nvmf_lvs_grow 00:10:14.338 ************************************ 00:10:14.338 10:15:27 -- nvmf/nvmf.sh@49 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:14.338 10:15:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:14.338 10:15:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:14.338 10:15:27 -- common/autotest_common.sh@10 -- # set +x 00:10:14.338 ************************************ 00:10:14.338 START TEST nvmf_bdev_io_wait 00:10:14.338 ************************************ 00:10:14.338 10:15:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:14.338 * Looking for test storage... 00:10:14.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:14.338 10:15:27 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:14.338 10:15:27 -- nvmf/common.sh@7 -- # uname -s 00:10:14.338 10:15:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.338 10:15:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.338 10:15:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.338 10:15:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.338 10:15:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.338 10:15:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.338 10:15:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.338 10:15:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.338 10:15:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.338 10:15:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.338 10:15:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:10:14.338 10:15:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:10:14.338 10:15:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.338 10:15:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.338 10:15:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:14.338 10:15:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:14.338 10:15:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.338 10:15:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.338 10:15:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.339 10:15:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.339 10:15:27 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.339 10:15:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.339 10:15:27 -- paths/export.sh@5 -- # export PATH 00:10:14.339 10:15:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.339 10:15:27 -- nvmf/common.sh@46 -- # : 0 00:10:14.339 10:15:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:14.339 10:15:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:14.339 10:15:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:14.339 10:15:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.339 10:15:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.339 10:15:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:14.339 10:15:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:14.339 10:15:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:14.339 10:15:27 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:14.339 10:15:27 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:14.339 10:15:27 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:14.339 10:15:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:14.339 10:15:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.339 10:15:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:14.339 10:15:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:14.339 10:15:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:14.339 10:15:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.339 10:15:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:14.339 10:15:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.339 10:15:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:14.339 10:15:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:14.339 10:15:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:14.339 10:15:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:14.339 10:15:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 
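For reference, the nvmf_veth_init steps that follow build a small veth-plus-bridge topology between the host and the nvmf_tgt_ns_spdk network namespace. A minimal hand-rolled sketch of the same wiring, using the harness defaults that appear below (the second target interface, 10.0.0.3, is omitted for brevity):
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
The pings to 10.0.0.2, 10.0.0.3 and 10.0.0.1 below are only sanity checks that the bridged path works before the target is started.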
00:10:14.339 10:15:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:14.339 10:15:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.339 10:15:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.339 10:15:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:14.339 10:15:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:14.339 10:15:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:14.339 10:15:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:14.339 10:15:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:14.339 10:15:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.339 10:15:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:14.339 10:15:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:14.339 10:15:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:14.339 10:15:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:14.339 10:15:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:14.339 10:15:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:14.339 Cannot find device "nvmf_tgt_br" 00:10:14.339 10:15:27 -- nvmf/common.sh@154 -- # true 00:10:14.339 10:15:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:14.339 Cannot find device "nvmf_tgt_br2" 00:10:14.339 10:15:27 -- nvmf/common.sh@155 -- # true 00:10:14.339 10:15:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:14.597 10:15:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:14.597 Cannot find device "nvmf_tgt_br" 00:10:14.597 10:15:27 -- nvmf/common.sh@157 -- # true 00:10:14.597 10:15:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:14.597 Cannot find device "nvmf_tgt_br2" 00:10:14.597 10:15:27 -- nvmf/common.sh@158 -- # true 00:10:14.597 10:15:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:14.597 10:15:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:14.597 10:15:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:14.597 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:14.597 10:15:27 -- nvmf/common.sh@161 -- # true 00:10:14.598 10:15:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:14.598 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:14.598 10:15:27 -- nvmf/common.sh@162 -- # true 00:10:14.598 10:15:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:14.598 10:15:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:14.598 10:15:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:14.598 10:15:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:14.598 10:15:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:14.598 10:15:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:14.598 10:15:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:14.598 10:15:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:14.598 10:15:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:14.598 
10:15:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:14.598 10:15:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:14.598 10:15:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:14.598 10:15:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:14.598 10:15:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:14.598 10:15:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:14.598 10:15:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:14.598 10:15:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:14.598 10:15:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:14.598 10:15:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:14.598 10:15:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:14.598 10:15:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:14.598 10:15:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:14.598 10:15:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:14.857 10:15:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:14.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:14.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:10:14.857 00:10:14.857 --- 10.0.0.2 ping statistics --- 00:10:14.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.857 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:10:14.857 10:15:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:14.857 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:14.857 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:10:14.857 00:10:14.857 --- 10.0.0.3 ping statistics --- 00:10:14.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.857 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:10:14.857 10:15:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:14.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:14.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:14.857 00:10:14.857 --- 10.0.0.1 ping statistics --- 00:10:14.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.857 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:14.857 10:15:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.857 10:15:28 -- nvmf/common.sh@421 -- # return 0 00:10:14.857 10:15:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:14.857 10:15:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.857 10:15:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:14.857 10:15:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:14.857 10:15:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.857 10:15:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:14.857 10:15:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:14.857 10:15:28 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:14.857 10:15:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:14.857 10:15:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:14.857 10:15:28 -- common/autotest_common.sh@10 -- # set +x 00:10:14.857 10:15:28 -- nvmf/common.sh@469 -- # nvmfpid=73239 00:10:14.857 10:15:28 -- nvmf/common.sh@470 -- # waitforlisten 73239 00:10:14.857 10:15:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:14.857 10:15:28 -- common/autotest_common.sh@819 -- # '[' -z 73239 ']' 00:10:14.857 10:15:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.857 10:15:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:14.857 10:15:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.857 10:15:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:14.857 10:15:28 -- common/autotest_common.sh@10 -- # set +x 00:10:14.857 [2024-07-26 10:15:28.149318] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:14.857 [2024-07-26 10:15:28.150182] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.857 [2024-07-26 10:15:28.291709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:15.115 [2024-07-26 10:15:28.388781] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:15.115 [2024-07-26 10:15:28.389264] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.115 [2024-07-26 10:15:28.389400] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.115 [2024-07-26 10:15:28.389550] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
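Because nvmf_tgt is started with --wait-for-rpc, the test finishes bring-up over the RPC socket; the rpc_cmd calls that follow in this test are roughly equivalent to the following sketch against the default /var/tmp/spdk.sock:
    ./scripts/rpc.py bdev_set_options -p 5 -c 1
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
That is: enable the TCP transport, back a 64 MB, 512-byte-block Malloc bdev, expose it as a namespace of cnode1, and listen on 10.0.0.2:4420.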
00:10:15.115 [2024-07-26 10:15:28.389870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.115 [2024-07-26 10:15:28.391187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.115 [2024-07-26 10:15:28.391368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.115 [2024-07-26 10:15:28.391374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.681 10:15:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:15.681 10:15:29 -- common/autotest_common.sh@852 -- # return 0 00:10:15.681 10:15:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:15.681 10:15:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:15.681 10:15:29 -- common/autotest_common.sh@10 -- # set +x 00:10:15.681 10:15:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.681 10:15:29 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:15.681 10:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:15.681 10:15:29 -- common/autotest_common.sh@10 -- # set +x 00:10:15.941 10:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:15.941 10:15:29 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:15.941 10:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:15.941 10:15:29 -- common/autotest_common.sh@10 -- # set +x 00:10:15.941 10:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:15.941 10:15:29 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:15.941 10:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:15.941 10:15:29 -- common/autotest_common.sh@10 -- # set +x 00:10:15.941 [2024-07-26 10:15:29.219665] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.941 10:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:15.941 10:15:29 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:15.941 10:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:15.941 10:15:29 -- common/autotest_common.sh@10 -- # set +x 00:10:15.941 Malloc0 00:10:15.941 10:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:15.941 10:15:29 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:15.941 10:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:15.941 10:15:29 -- common/autotest_common.sh@10 -- # set +x 00:10:15.941 10:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:15.941 10:15:29 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:15.941 10:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:15.941 10:15:29 -- common/autotest_common.sh@10 -- # set +x 00:10:15.941 10:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:15.941 10:15:29 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.941 10:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:15.941 10:15:29 -- common/autotest_common.sh@10 -- # set +x 00:10:15.941 [2024-07-26 10:15:29.286813] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.941 10:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:15.941 10:15:29 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=73280 00:10:15.941 10:15:29 
-- target/bdev_io_wait.sh@30 -- # READ_PID=73282 00:10:15.941 10:15:29 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:15.941 10:15:29 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:15.941 10:15:29 -- nvmf/common.sh@520 -- # config=() 00:10:15.941 10:15:29 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=73284 00:10:15.941 10:15:29 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:15.941 10:15:29 -- nvmf/common.sh@520 -- # local subsystem config 00:10:15.941 10:15:29 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:15.941 10:15:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:15.941 10:15:29 -- nvmf/common.sh@520 -- # config=() 00:10:15.941 10:15:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:15.941 { 00:10:15.941 "params": { 00:10:15.941 "name": "Nvme$subsystem", 00:10:15.941 "trtype": "$TEST_TRANSPORT", 00:10:15.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.941 "adrfam": "ipv4", 00:10:15.941 "trsvcid": "$NVMF_PORT", 00:10:15.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.941 "hdgst": ${hdgst:-false}, 00:10:15.941 "ddgst": ${ddgst:-false} 00:10:15.941 }, 00:10:15.941 "method": "bdev_nvme_attach_controller" 00:10:15.941 } 00:10:15.941 EOF 00:10:15.941 )") 00:10:15.941 10:15:29 -- nvmf/common.sh@520 -- # local subsystem config 00:10:15.941 10:15:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:15.941 10:15:29 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:15.941 10:15:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:15.941 { 00:10:15.941 "params": { 00:10:15.941 "name": "Nvme$subsystem", 00:10:15.941 "trtype": "$TEST_TRANSPORT", 00:10:15.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.941 "adrfam": "ipv4", 00:10:15.941 "trsvcid": "$NVMF_PORT", 00:10:15.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.941 "hdgst": ${hdgst:-false}, 00:10:15.941 "ddgst": ${ddgst:-false} 00:10:15.941 }, 00:10:15.941 "method": "bdev_nvme_attach_controller" 00:10:15.941 } 00:10:15.941 EOF 00:10:15.941 )") 00:10:15.941 10:15:29 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:15.941 10:15:29 -- nvmf/common.sh@542 -- # cat 00:10:15.941 10:15:29 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:15.941 10:15:29 -- nvmf/common.sh@520 -- # config=() 00:10:15.941 10:15:29 -- nvmf/common.sh@520 -- # local subsystem config 00:10:15.941 10:15:29 -- nvmf/common.sh@542 -- # cat 00:10:15.941 10:15:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:15.941 10:15:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:15.941 { 00:10:15.941 "params": { 00:10:15.941 "name": "Nvme$subsystem", 00:10:15.941 "trtype": "$TEST_TRANSPORT", 00:10:15.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.941 "adrfam": "ipv4", 00:10:15.941 "trsvcid": "$NVMF_PORT", 00:10:15.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.941 "hdgst": ${hdgst:-false}, 00:10:15.941 "ddgst": ${ddgst:-false} 
00:10:15.941 }, 00:10:15.941 "method": "bdev_nvme_attach_controller" 00:10:15.941 } 00:10:15.941 EOF 00:10:15.941 )") 00:10:15.941 10:15:29 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:15.941 10:15:29 -- nvmf/common.sh@520 -- # config=() 00:10:15.941 10:15:29 -- nvmf/common.sh@520 -- # local subsystem config 00:10:15.941 10:15:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:15.941 10:15:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:15.941 { 00:10:15.941 "params": { 00:10:15.941 "name": "Nvme$subsystem", 00:10:15.941 "trtype": "$TEST_TRANSPORT", 00:10:15.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.941 "adrfam": "ipv4", 00:10:15.941 "trsvcid": "$NVMF_PORT", 00:10:15.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.941 "hdgst": ${hdgst:-false}, 00:10:15.941 "ddgst": ${ddgst:-false} 00:10:15.941 }, 00:10:15.941 "method": "bdev_nvme_attach_controller" 00:10:15.941 } 00:10:15.941 EOF 00:10:15.941 )") 00:10:15.941 10:15:29 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=73286 00:10:15.941 10:15:29 -- target/bdev_io_wait.sh@35 -- # sync 00:10:15.941 10:15:29 -- nvmf/common.sh@542 -- # cat 00:10:15.941 10:15:29 -- nvmf/common.sh@544 -- # jq . 00:10:15.941 10:15:29 -- nvmf/common.sh@544 -- # jq . 00:10:15.941 10:15:29 -- nvmf/common.sh@545 -- # IFS=, 00:10:15.941 10:15:29 -- nvmf/common.sh@542 -- # cat 00:10:15.941 10:15:29 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:15.941 "params": { 00:10:15.941 "name": "Nvme1", 00:10:15.941 "trtype": "tcp", 00:10:15.941 "traddr": "10.0.0.2", 00:10:15.941 "adrfam": "ipv4", 00:10:15.941 "trsvcid": "4420", 00:10:15.941 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.941 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:15.941 "hdgst": false, 00:10:15.941 "ddgst": false 00:10:15.941 }, 00:10:15.941 "method": "bdev_nvme_attach_controller" 00:10:15.941 }' 00:10:15.941 10:15:29 -- nvmf/common.sh@545 -- # IFS=, 00:10:15.941 10:15:29 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:15.942 "params": { 00:10:15.942 "name": "Nvme1", 00:10:15.942 "trtype": "tcp", 00:10:15.942 "traddr": "10.0.0.2", 00:10:15.942 "adrfam": "ipv4", 00:10:15.942 "trsvcid": "4420", 00:10:15.942 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.942 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:15.942 "hdgst": false, 00:10:15.942 "ddgst": false 00:10:15.942 }, 00:10:15.942 "method": "bdev_nvme_attach_controller" 00:10:15.942 }' 00:10:15.942 10:15:29 -- nvmf/common.sh@544 -- # jq . 00:10:15.942 10:15:29 -- nvmf/common.sh@545 -- # IFS=, 00:10:15.942 10:15:29 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:15.942 "params": { 00:10:15.942 "name": "Nvme1", 00:10:15.942 "trtype": "tcp", 00:10:15.942 "traddr": "10.0.0.2", 00:10:15.942 "adrfam": "ipv4", 00:10:15.942 "trsvcid": "4420", 00:10:15.942 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.942 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:15.942 "hdgst": false, 00:10:15.942 "ddgst": false 00:10:15.942 }, 00:10:15.942 "method": "bdev_nvme_attach_controller" 00:10:15.942 }' 00:10:15.942 10:15:29 -- nvmf/common.sh@544 -- # jq . 
00:10:15.942 10:15:29 -- nvmf/common.sh@545 -- # IFS=, 00:10:15.942 10:15:29 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:15.942 "params": { 00:10:15.942 "name": "Nvme1", 00:10:15.942 "trtype": "tcp", 00:10:15.942 "traddr": "10.0.0.2", 00:10:15.942 "adrfam": "ipv4", 00:10:15.942 "trsvcid": "4420", 00:10:15.942 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.942 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:15.942 "hdgst": false, 00:10:15.942 "ddgst": false 00:10:15.942 }, 00:10:15.942 "method": "bdev_nvme_attach_controller" 00:10:15.942 }' 00:10:15.942 [2024-07-26 10:15:29.338975] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:15.942 [2024-07-26 10:15:29.339236] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:15.942 [2024-07-26 10:15:29.348369] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:15.942 [2024-07-26 10:15:29.348666] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:15.942 [2024-07-26 10:15:29.363515] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:15.942 [2024-07-26 10:15:29.363619] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:15.942 [2024-07-26 10:15:29.390259] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:15.942 [2024-07-26 10:15:29.390378] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:15.942 10:15:29 -- target/bdev_io_wait.sh@37 -- # wait 73280 00:10:16.200 [2024-07-26 10:15:29.554427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.200 [2024-07-26 10:15:29.626275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:16.200 [2024-07-26 10:15:29.632710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.459 [2024-07-26 10:15:29.710139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.459 [2024-07-26 10:15:29.710747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:16.459 [2024-07-26 10:15:29.786699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:16.459 [2024-07-26 10:15:29.794189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.459 Running I/O for 1 seconds... 00:10:16.459 Running I/O for 1 seconds... 00:10:16.459 [2024-07-26 10:15:29.870028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:10:16.718 Running I/O for 1 seconds... 00:10:16.718 Running I/O for 1 seconds... 
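Each of the four bdevperf instances above (write, read, flush, unmap workloads on core masks 0x10/0x20/0x40/0x80) is handed the target as a bdev config on /dev/fd/63, produced by the gen_nvmf_target_json helper used in this log. The pattern, sketched for the write worker:
    ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json)
The generated JSON, printed above, boils down to a single bdev_nvme_attach_controller entry pointing at 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1, so every worker drives its I/O pattern at the same Nvme1n1 bdev over TCP.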
00:10:17.654 00:10:17.654 Latency(us) 00:10:17.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.654 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:17.654 Nvme1n1 : 1.00 173701.47 678.52 0.00 0.00 734.28 327.68 1154.33 00:10:17.654 =================================================================================================================== 00:10:17.654 Total : 173701.47 678.52 0.00 0.00 734.28 327.68 1154.33 00:10:17.655 00:10:17.655 Latency(us) 00:10:17.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.655 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:17.655 Nvme1n1 : 1.01 10326.78 40.34 0.00 0.00 12340.91 7745.16 20137.43 00:10:17.655 =================================================================================================================== 00:10:17.655 Total : 10326.78 40.34 0.00 0.00 12340.91 7745.16 20137.43 00:10:17.655 00:10:17.655 Latency(us) 00:10:17.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.655 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:17.655 Nvme1n1 : 1.01 7747.26 30.26 0.00 0.00 16441.32 8698.41 28597.53 00:10:17.655 =================================================================================================================== 00:10:17.655 Total : 7747.26 30.26 0.00 0.00 16441.32 8698.41 28597.53 00:10:17.655 00:10:17.655 Latency(us) 00:10:17.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.655 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:17.655 Nvme1n1 : 1.01 8184.56 31.97 0.00 0.00 15575.43 7238.75 28120.90 00:10:17.655 =================================================================================================================== 00:10:17.655 Total : 8184.56 31.97 0.00 0.00 15575.43 7238.75 28120.90 00:10:17.913 10:15:31 -- target/bdev_io_wait.sh@38 -- # wait 73282 00:10:17.913 10:15:31 -- target/bdev_io_wait.sh@39 -- # wait 73284 00:10:17.913 10:15:31 -- target/bdev_io_wait.sh@40 -- # wait 73286 00:10:17.913 10:15:31 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:17.913 10:15:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.913 10:15:31 -- common/autotest_common.sh@10 -- # set +x 00:10:17.913 10:15:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:17.913 10:15:31 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:17.913 10:15:31 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:17.913 10:15:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:17.913 10:15:31 -- nvmf/common.sh@116 -- # sync 00:10:17.913 10:15:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:17.913 10:15:31 -- nvmf/common.sh@119 -- # set +e 00:10:17.913 10:15:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:17.913 10:15:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:17.913 rmmod nvme_tcp 00:10:17.913 rmmod nvme_fabrics 00:10:17.913 rmmod nvme_keyring 00:10:17.913 10:15:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:17.913 10:15:31 -- nvmf/common.sh@123 -- # set -e 00:10:17.913 10:15:31 -- nvmf/common.sh@124 -- # return 0 00:10:17.913 10:15:31 -- nvmf/common.sh@477 -- # '[' -n 73239 ']' 00:10:17.913 10:15:31 -- nvmf/common.sh@478 -- # killprocess 73239 00:10:17.913 10:15:31 -- common/autotest_common.sh@926 -- # '[' -z 73239 ']' 00:10:17.913 10:15:31 -- common/autotest_common.sh@930 
-- # kill -0 73239 00:10:17.913 10:15:31 -- common/autotest_common.sh@931 -- # uname 00:10:17.913 10:15:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:17.913 10:15:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73239 00:10:18.172 10:15:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:18.172 10:15:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:18.172 killing process with pid 73239 00:10:18.172 10:15:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73239' 00:10:18.172 10:15:31 -- common/autotest_common.sh@945 -- # kill 73239 00:10:18.172 10:15:31 -- common/autotest_common.sh@950 -- # wait 73239 00:10:18.172 10:15:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:18.172 10:15:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:18.172 10:15:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:18.172 10:15:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:18.172 10:15:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:18.172 10:15:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.172 10:15:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:18.172 10:15:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.172 10:15:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:18.172 00:10:18.172 real 0m3.969s 00:10:18.172 user 0m17.200s 00:10:18.172 sys 0m2.220s 00:10:18.172 10:15:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.172 10:15:31 -- common/autotest_common.sh@10 -- # set +x 00:10:18.172 ************************************ 00:10:18.172 END TEST nvmf_bdev_io_wait 00:10:18.172 ************************************ 00:10:18.431 10:15:31 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:18.432 10:15:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:18.432 10:15:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:18.432 10:15:31 -- common/autotest_common.sh@10 -- # set +x 00:10:18.432 ************************************ 00:10:18.432 START TEST nvmf_queue_depth 00:10:18.432 ************************************ 00:10:18.432 10:15:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:18.432 * Looking for test storage... 
00:10:18.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:18.432 10:15:31 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:18.432 10:15:31 -- nvmf/common.sh@7 -- # uname -s 00:10:18.432 10:15:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.432 10:15:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.432 10:15:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.432 10:15:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.432 10:15:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.432 10:15:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.432 10:15:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.432 10:15:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.432 10:15:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.432 10:15:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.432 10:15:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:10:18.432 10:15:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:10:18.432 10:15:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.432 10:15:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.432 10:15:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:18.432 10:15:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:18.432 10:15:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.432 10:15:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.432 10:15:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.432 10:15:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.432 10:15:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.432 10:15:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.432 10:15:31 -- 
paths/export.sh@5 -- # export PATH 00:10:18.432 10:15:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.432 10:15:31 -- nvmf/common.sh@46 -- # : 0 00:10:18.432 10:15:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:18.432 10:15:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:18.432 10:15:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:18.432 10:15:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.432 10:15:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.432 10:15:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:18.432 10:15:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:18.432 10:15:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:18.432 10:15:31 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:18.432 10:15:31 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:18.432 10:15:31 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:18.432 10:15:31 -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:18.432 10:15:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:18.432 10:15:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.432 10:15:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:18.432 10:15:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:18.432 10:15:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:18.432 10:15:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.432 10:15:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:18.432 10:15:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.432 10:15:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:18.432 10:15:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:18.432 10:15:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:18.432 10:15:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:18.432 10:15:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:18.432 10:15:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:18.432 10:15:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.432 10:15:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.432 10:15:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:18.432 10:15:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:18.432 10:15:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:18.432 10:15:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:18.432 10:15:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:18.432 10:15:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.432 10:15:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:18.432 10:15:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:18.432 10:15:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:18.432 10:15:31 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:18.432 10:15:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:18.432 10:15:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:18.432 Cannot find device "nvmf_tgt_br" 00:10:18.432 10:15:31 -- nvmf/common.sh@154 -- # true 00:10:18.432 10:15:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:18.432 Cannot find device "nvmf_tgt_br2" 00:10:18.432 10:15:31 -- nvmf/common.sh@155 -- # true 00:10:18.432 10:15:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:18.432 10:15:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:18.432 Cannot find device "nvmf_tgt_br" 00:10:18.432 10:15:31 -- nvmf/common.sh@157 -- # true 00:10:18.432 10:15:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:18.432 Cannot find device "nvmf_tgt_br2" 00:10:18.432 10:15:31 -- nvmf/common.sh@158 -- # true 00:10:18.432 10:15:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:18.432 10:15:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:18.432 10:15:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:18.432 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.432 10:15:31 -- nvmf/common.sh@161 -- # true 00:10:18.432 10:15:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:18.432 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.432 10:15:31 -- nvmf/common.sh@162 -- # true 00:10:18.432 10:15:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:18.691 10:15:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:18.691 10:15:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:18.691 10:15:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:18.691 10:15:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:18.691 10:15:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:18.691 10:15:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:18.691 10:15:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:18.691 10:15:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:18.691 10:15:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:18.691 10:15:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:18.691 10:15:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:18.691 10:15:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:18.691 10:15:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:18.691 10:15:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:18.691 10:15:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:18.691 10:15:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:18.691 10:15:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:18.691 10:15:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:18.691 10:15:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:18.691 10:15:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:18.691 
10:15:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:18.691 10:15:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:18.691 10:15:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:18.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:10:18.691 00:10:18.691 --- 10.0.0.2 ping statistics --- 00:10:18.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.691 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:18.691 10:15:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:18.691 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:18.691 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:10:18.691 00:10:18.691 --- 10.0.0.3 ping statistics --- 00:10:18.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.691 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:18.691 10:15:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:18.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:18.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:18.691 00:10:18.691 --- 10.0.0.1 ping statistics --- 00:10:18.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.691 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:18.691 10:15:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.691 10:15:32 -- nvmf/common.sh@421 -- # return 0 00:10:18.691 10:15:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:18.691 10:15:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.691 10:15:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:18.691 10:15:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:18.691 10:15:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.691 10:15:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:18.691 10:15:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:18.691 10:15:32 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:18.691 10:15:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:18.691 10:15:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:18.691 10:15:32 -- common/autotest_common.sh@10 -- # set +x 00:10:18.691 10:15:32 -- nvmf/common.sh@469 -- # nvmfpid=73509 00:10:18.691 10:15:32 -- nvmf/common.sh@470 -- # waitforlisten 73509 00:10:18.691 10:15:32 -- common/autotest_common.sh@819 -- # '[' -z 73509 ']' 00:10:18.691 10:15:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:18.691 10:15:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.691 10:15:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:18.691 10:15:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.691 10:15:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:18.691 10:15:32 -- common/autotest_common.sh@10 -- # set +x 00:10:18.691 [2024-07-26 10:15:32.140876] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
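Stripped of the harness wrappers, the queue_depth target launch above is just the nvmf_tgt binary pinned to core mask 0x2 and run inside the test namespace, e.g. (sketch; the harness then blocks in waitforlisten until /var/tmp/spdk.sock is up):
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &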
00:10:18.692 [2024-07-26 10:15:32.140983] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.949 [2024-07-26 10:15:32.283884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.949 [2024-07-26 10:15:32.371163] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:18.949 [2024-07-26 10:15:32.371347] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.949 [2024-07-26 10:15:32.371361] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.949 [2024-07-26 10:15:32.371370] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.949 [2024-07-26 10:15:32.371395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.881 10:15:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:19.881 10:15:33 -- common/autotest_common.sh@852 -- # return 0 00:10:19.881 10:15:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:19.881 10:15:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:19.881 10:15:33 -- common/autotest_common.sh@10 -- # set +x 00:10:19.881 10:15:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.881 10:15:33 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:19.881 10:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:19.881 10:15:33 -- common/autotest_common.sh@10 -- # set +x 00:10:19.881 [2024-07-26 10:15:33.139284] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.881 10:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:19.881 10:15:33 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:19.881 10:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:19.881 10:15:33 -- common/autotest_common.sh@10 -- # set +x 00:10:19.881 Malloc0 00:10:19.881 10:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:19.881 10:15:33 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:19.881 10:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:19.881 10:15:33 -- common/autotest_common.sh@10 -- # set +x 00:10:19.881 10:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:19.881 10:15:33 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:19.881 10:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:19.881 10:15:33 -- common/autotest_common.sh@10 -- # set +x 00:10:19.881 10:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:19.881 10:15:33 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:19.881 10:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:19.881 10:15:33 -- common/autotest_common.sh@10 -- # set +x 00:10:19.881 [2024-07-26 10:15:33.199218] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:19.881 10:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:19.881 10:15:33 -- target/queue_depth.sh@30 -- # bdevperf_pid=73541 00:10:19.881 10:15:33 
-- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:19.881 10:15:33 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:19.881 10:15:33 -- target/queue_depth.sh@33 -- # waitforlisten 73541 /var/tmp/bdevperf.sock 00:10:19.881 10:15:33 -- common/autotest_common.sh@819 -- # '[' -z 73541 ']' 00:10:19.881 10:15:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:19.881 10:15:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:19.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:19.881 10:15:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:19.881 10:15:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:19.881 10:15:33 -- common/autotest_common.sh@10 -- # set +x 00:10:19.881 [2024-07-26 10:15:33.254896] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:19.881 [2024-07-26 10:15:33.254998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73541 ] 00:10:20.139 [2024-07-26 10:15:33.393776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.139 [2024-07-26 10:15:33.489348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.745 10:15:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:20.745 10:15:34 -- common/autotest_common.sh@852 -- # return 0 00:10:20.745 10:15:34 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:20.745 10:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:20.745 10:15:34 -- common/autotest_common.sh@10 -- # set +x 00:10:21.002 NVMe0n1 00:10:21.002 10:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:21.002 10:15:34 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:21.002 Running I/O for 10 seconds... 
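The queue-depth measurement above reduces to three steps: start bdevperf in wait mode (-z) on its own RPC socket, attach the target subsystem as an NVMe bdev over TCP, then trigger the run from bdevperf.py. A sketch using the values from this log:
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
With -q 1024 the verify workload keeps 1024 outstanding 4 KiB I/Os against NVMe0n1, and the ten-second run below reports the sustained IOPS.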
00:10:33.204 00:10:33.204 Latency(us) 00:10:33.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:33.204 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:33.204 Verification LBA range: start 0x0 length 0x4000 00:10:33.204 NVMe0n1 : 10.07 13901.47 54.30 0.00 0.00 73353.23 16443.58 56480.12 00:10:33.204 =================================================================================================================== 00:10:33.204 Total : 13901.47 54.30 0.00 0.00 73353.23 16443.58 56480.12 00:10:33.204 0 00:10:33.204 10:15:44 -- target/queue_depth.sh@39 -- # killprocess 73541 00:10:33.204 10:15:44 -- common/autotest_common.sh@926 -- # '[' -z 73541 ']' 00:10:33.204 10:15:44 -- common/autotest_common.sh@930 -- # kill -0 73541 00:10:33.204 10:15:44 -- common/autotest_common.sh@931 -- # uname 00:10:33.204 10:15:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:33.204 10:15:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73541 00:10:33.204 10:15:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:33.204 10:15:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:33.204 killing process with pid 73541 00:10:33.204 10:15:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73541' 00:10:33.204 10:15:44 -- common/autotest_common.sh@945 -- # kill 73541 00:10:33.204 Received shutdown signal, test time was about 10.000000 seconds 00:10:33.204 00:10:33.204 Latency(us) 00:10:33.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:33.204 =================================================================================================================== 00:10:33.204 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:33.204 10:15:44 -- common/autotest_common.sh@950 -- # wait 73541 00:10:33.204 10:15:44 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:33.204 10:15:44 -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:33.204 10:15:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:33.204 10:15:44 -- nvmf/common.sh@116 -- # sync 00:10:33.204 10:15:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:33.204 10:15:44 -- nvmf/common.sh@119 -- # set +e 00:10:33.204 10:15:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:33.204 10:15:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:33.204 rmmod nvme_tcp 00:10:33.204 rmmod nvme_fabrics 00:10:33.204 rmmod nvme_keyring 00:10:33.204 10:15:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:33.204 10:15:44 -- nvmf/common.sh@123 -- # set -e 00:10:33.204 10:15:44 -- nvmf/common.sh@124 -- # return 0 00:10:33.204 10:15:44 -- nvmf/common.sh@477 -- # '[' -n 73509 ']' 00:10:33.204 10:15:44 -- nvmf/common.sh@478 -- # killprocess 73509 00:10:33.204 10:15:44 -- common/autotest_common.sh@926 -- # '[' -z 73509 ']' 00:10:33.204 10:15:44 -- common/autotest_common.sh@930 -- # kill -0 73509 00:10:33.204 10:15:44 -- common/autotest_common.sh@931 -- # uname 00:10:33.204 10:15:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:33.204 10:15:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73509 00:10:33.204 10:15:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:10:33.205 10:15:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:10:33.205 killing process with pid 73509 00:10:33.205 10:15:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73509' 00:10:33.205 10:15:44 -- 
common/autotest_common.sh@945 -- # kill 73509 00:10:33.205 10:15:44 -- common/autotest_common.sh@950 -- # wait 73509 00:10:33.205 10:15:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:33.205 10:15:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:33.205 10:15:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:33.205 10:15:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:33.205 10:15:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:33.205 10:15:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.205 10:15:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:33.205 10:15:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.205 10:15:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:33.205 00:10:33.205 real 0m13.459s 00:10:33.205 user 0m23.493s 00:10:33.205 sys 0m1.961s 00:10:33.205 10:15:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.205 10:15:45 -- common/autotest_common.sh@10 -- # set +x 00:10:33.205 ************************************ 00:10:33.205 END TEST nvmf_queue_depth 00:10:33.205 ************************************ 00:10:33.205 10:15:45 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:33.205 10:15:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:33.205 10:15:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:33.205 10:15:45 -- common/autotest_common.sh@10 -- # set +x 00:10:33.205 ************************************ 00:10:33.205 START TEST nvmf_multipath 00:10:33.205 ************************************ 00:10:33.205 10:15:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:33.205 * Looking for test storage... 
00:10:33.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:33.205 10:15:45 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:33.205 10:15:45 -- nvmf/common.sh@7 -- # uname -s 00:10:33.205 10:15:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.205 10:15:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.205 10:15:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.205 10:15:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.205 10:15:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.205 10:15:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.205 10:15:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.205 10:15:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.205 10:15:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.205 10:15:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.205 10:15:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:10:33.205 10:15:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:10:33.205 10:15:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.205 10:15:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.205 10:15:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:33.205 10:15:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:33.205 10:15:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.205 10:15:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.205 10:15:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.205 10:15:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.205 10:15:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.205 10:15:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.205 10:15:45 -- 
paths/export.sh@5 -- # export PATH 00:10:33.205 10:15:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.205 10:15:45 -- nvmf/common.sh@46 -- # : 0 00:10:33.205 10:15:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:33.205 10:15:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:33.205 10:15:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:33.205 10:15:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.205 10:15:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.205 10:15:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:33.205 10:15:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:33.205 10:15:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:33.205 10:15:45 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:33.205 10:15:45 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:33.205 10:15:45 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:33.205 10:15:45 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:33.205 10:15:45 -- target/multipath.sh@43 -- # nvmftestinit 00:10:33.205 10:15:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:33.205 10:15:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.205 10:15:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:33.205 10:15:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:33.205 10:15:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:33.205 10:15:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.205 10:15:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:33.205 10:15:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.205 10:15:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:33.205 10:15:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:33.205 10:15:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:33.205 10:15:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:33.205 10:15:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:33.205 10:15:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:33.205 10:15:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:33.205 10:15:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:33.205 10:15:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:33.205 10:15:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:33.205 10:15:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:33.205 10:15:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:33.205 10:15:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:33.205 10:15:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:33.205 10:15:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:33.205 10:15:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:33.205 10:15:45 -- nvmf/common.sh@150 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:33.205 10:15:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:33.205 10:15:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:33.205 10:15:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:33.205 Cannot find device "nvmf_tgt_br" 00:10:33.205 10:15:45 -- nvmf/common.sh@154 -- # true 00:10:33.205 10:15:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:33.205 Cannot find device "nvmf_tgt_br2" 00:10:33.205 10:15:45 -- nvmf/common.sh@155 -- # true 00:10:33.205 10:15:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:33.205 10:15:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:33.205 Cannot find device "nvmf_tgt_br" 00:10:33.205 10:15:45 -- nvmf/common.sh@157 -- # true 00:10:33.205 10:15:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:33.205 Cannot find device "nvmf_tgt_br2" 00:10:33.205 10:15:45 -- nvmf/common.sh@158 -- # true 00:10:33.205 10:15:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:33.205 10:15:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:33.205 10:15:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:33.205 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:33.205 10:15:45 -- nvmf/common.sh@161 -- # true 00:10:33.205 10:15:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:33.205 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:33.205 10:15:45 -- nvmf/common.sh@162 -- # true 00:10:33.205 10:15:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:33.205 10:15:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:33.205 10:15:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:33.205 10:15:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:33.205 10:15:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:33.205 10:15:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:33.205 10:15:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:33.205 10:15:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:33.205 10:15:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:33.205 10:15:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:33.206 10:15:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:33.206 10:15:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:33.206 10:15:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:33.206 10:15:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:33.206 10:15:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:33.206 10:15:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:33.206 10:15:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:33.206 10:15:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:33.206 10:15:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:33.206 10:15:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:33.206 10:15:45 
-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:33.206 10:15:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:33.206 10:15:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:33.206 10:15:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:33.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:33.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:10:33.206 00:10:33.206 --- 10.0.0.2 ping statistics --- 00:10:33.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.206 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:33.206 10:15:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:33.206 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:33.206 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:10:33.206 00:10:33.206 --- 10.0.0.3 ping statistics --- 00:10:33.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.206 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:33.206 10:15:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:33.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:33.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:10:33.206 00:10:33.206 --- 10.0.0.1 ping statistics --- 00:10:33.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.206 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:10:33.206 10:15:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:33.206 10:15:45 -- nvmf/common.sh@421 -- # return 0 00:10:33.206 10:15:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:33.206 10:15:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:33.206 10:15:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:33.206 10:15:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:33.206 10:15:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:33.206 10:15:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:33.206 10:15:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:33.206 10:15:45 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:10:33.206 10:15:45 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:33.206 10:15:45 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:33.206 10:15:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:33.206 10:15:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:33.206 10:15:45 -- common/autotest_common.sh@10 -- # set +x 00:10:33.206 10:15:45 -- nvmf/common.sh@469 -- # nvmfpid=73871 00:10:33.206 10:15:45 -- nvmf/common.sh@470 -- # waitforlisten 73871 00:10:33.206 10:15:45 -- common/autotest_common.sh@819 -- # '[' -z 73871 ']' 00:10:33.206 10:15:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:33.206 10:15:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.206 10:15:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:33.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.206 10:15:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
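Before the target above was started, nvmf_veth_init built the test network whose connectivity the three pings confirm (initiator to 10.0.0.2 and 10.0.0.3, and 10.0.0.1 from inside the namespace). Condensed into plain iproute2 commands, the same layout can be rebuilt by hand roughly as follows; interface names, addresses and the namespace name are copied from the log, but this is a simplified sketch of what common.sh does, not the harness code itself (run as root):

    ip netns add nvmf_tgt_ns_spdk                                  # namespace that hosts nvmf_tgt
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target path
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target path
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # move target ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge tying the host-side ends together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # initiator -> both target paths

With this in place, nvmf_tgt and its listeners on 10.0.0.2/10.0.0.3 port 4420 live inside nvmf_tgt_ns_spdk, while the nvme connect calls later in the log run from the root namespace over nvmf_init_if.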
00:10:33.206 10:15:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:33.206 10:15:45 -- common/autotest_common.sh@10 -- # set +x 00:10:33.206 [2024-07-26 10:15:45.633129] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:33.206 [2024-07-26 10:15:45.633223] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.206 [2024-07-26 10:15:45.778121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:33.206 [2024-07-26 10:15:45.873273] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:33.206 [2024-07-26 10:15:45.873472] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:33.206 [2024-07-26 10:15:45.873489] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:33.206 [2024-07-26 10:15:45.873500] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:33.206 [2024-07-26 10:15:45.873898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.206 [2024-07-26 10:15:45.874187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.206 [2024-07-26 10:15:45.874333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.206 [2024-07-26 10:15:45.874340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.206 10:15:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:33.206 10:15:46 -- common/autotest_common.sh@852 -- # return 0 00:10:33.206 10:15:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:33.206 10:15:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:33.206 10:15:46 -- common/autotest_common.sh@10 -- # set +x 00:10:33.206 10:15:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.206 10:15:46 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:33.465 [2024-07-26 10:15:46.905175] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.724 10:15:46 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:33.983 Malloc0 00:10:33.983 10:15:47 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:34.241 10:15:47 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:34.241 10:15:47 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:34.500 [2024-07-26 10:15:47.867859] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:34.500 10:15:47 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:34.758 [2024-07-26 10:15:48.080070] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:34.758 10:15:48 -- target/multipath.sh@67 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 --hostid=ff23582e-2e24-4796-b69f-f798c3c56909 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:10:35.017 10:15:48 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 --hostid=ff23582e-2e24-4796-b69f-f798c3c56909 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:35.017 10:15:48 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:35.017 10:15:48 -- common/autotest_common.sh@1177 -- # local i=0 00:10:35.017 10:15:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:10:35.017 10:15:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:10:35.017 10:15:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:10:36.918 10:15:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:10:36.918 10:15:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:10:36.918 10:15:50 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:10:37.176 10:15:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:10:37.176 10:15:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:10:37.176 10:15:50 -- common/autotest_common.sh@1187 -- # return 0 00:10:37.176 10:15:50 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:37.176 10:15:50 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:37.176 10:15:50 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:37.176 10:15:50 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:37.176 10:15:50 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:37.176 10:15:50 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:37.176 10:15:50 -- target/multipath.sh@38 -- # return 0 00:10:37.176 10:15:50 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:37.176 10:15:50 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:37.176 10:15:50 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:37.176 10:15:50 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:37.176 10:15:50 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:37.176 10:15:50 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:37.176 10:15:50 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:37.176 10:15:50 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:37.176 10:15:50 -- target/multipath.sh@22 -- # local timeout=20 00:10:37.176 10:15:50 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:37.176 10:15:50 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:37.176 10:15:50 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:37.176 10:15:50 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:37.176 10:15:50 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:37.176 10:15:50 -- target/multipath.sh@22 -- # local timeout=20 00:10:37.176 10:15:50 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:37.176 10:15:50 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:37.176 10:15:50 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:37.176 10:15:50 -- target/multipath.sh@85 -- # echo numa 00:10:37.176 10:15:50 -- target/multipath.sh@88 -- # fio_pid=73955 00:10:37.176 10:15:50 -- target/multipath.sh@90 -- # sleep 1 00:10:37.176 10:15:50 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:37.176 [global] 00:10:37.176 thread=1 00:10:37.176 invalidate=1 00:10:37.176 rw=randrw 00:10:37.176 time_based=1 00:10:37.176 runtime=6 00:10:37.176 ioengine=libaio 00:10:37.176 direct=1 00:10:37.176 bs=4096 00:10:37.177 iodepth=128 00:10:37.177 norandommap=0 00:10:37.177 numjobs=1 00:10:37.177 00:10:37.177 verify_dump=1 00:10:37.177 verify_backlog=512 00:10:37.177 verify_state_save=0 00:10:37.177 do_verify=1 00:10:37.177 verify=crc32c-intel 00:10:37.177 [job0] 00:10:37.177 filename=/dev/nvme0n1 00:10:37.177 Could not set queue depth (nvme0n1) 00:10:37.177 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:37.177 fio-3.35 00:10:37.177 Starting 1 thread 00:10:38.111 10:15:51 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:38.370 10:15:51 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:38.629 10:15:51 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:38.629 10:15:51 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:38.629 10:15:51 -- target/multipath.sh@22 -- # local timeout=20 00:10:38.629 10:15:51 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:38.629 10:15:51 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:38.629 10:15:51 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:38.629 10:15:51 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:38.629 10:15:51 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:38.629 10:15:51 -- target/multipath.sh@22 -- # local timeout=20 00:10:38.629 10:15:51 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:38.629 10:15:51 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:38.629 10:15:51 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:38.629 10:15:51 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:38.888 10:15:52 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:39.147 10:15:52 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:39.147 10:15:52 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:39.147 10:15:52 -- target/multipath.sh@22 -- # local timeout=20 00:10:39.147 10:15:52 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:39.147 10:15:52 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:39.147 10:15:52 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:39.147 10:15:52 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:39.147 10:15:52 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:39.147 10:15:52 -- target/multipath.sh@22 -- # local timeout=20 00:10:39.147 10:15:52 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:39.147 10:15:52 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:39.147 10:15:52 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:39.147 10:15:52 -- target/multipath.sh@104 -- # wait 73955 00:10:43.336 00:10:43.336 job0: (groupid=0, jobs=1): err= 0: pid=73982: Fri Jul 26 10:15:56 2024 00:10:43.336 read: IOPS=11.3k, BW=44.0MiB/s (46.2MB/s)(264MiB/6005msec) 00:10:43.336 slat (usec): min=6, max=7360, avg=51.87, stdev=225.58 00:10:43.336 clat (usec): min=1025, max=16945, avg=7745.31, stdev=1424.67 00:10:43.336 lat (usec): min=1036, max=16966, avg=7797.19, stdev=1430.07 00:10:43.336 clat percentiles (usec): 00:10:43.336 | 1.00th=[ 4080], 5.00th=[ 5669], 10.00th=[ 6456], 20.00th=[ 6980], 00:10:43.336 | 30.00th=[ 7177], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7832], 00:10:43.336 | 70.00th=[ 8094], 80.00th=[ 8356], 90.00th=[ 8979], 95.00th=[11207], 00:10:43.336 | 99.00th=[12125], 99.50th=[12387], 99.90th=[12780], 99.95th=[13173], 00:10:43.336 | 99.99th=[14091] 00:10:43.336 bw ( KiB/s): min=13848, max=28496, per=51.96%, avg=23421.33, stdev=4945.51, samples=12 00:10:43.336 iops : min= 3462, max= 7124, avg=5855.33, stdev=1236.38, samples=12 00:10:43.336 write: IOPS=6419, BW=25.1MiB/s (26.3MB/s)(137MiB/5472msec); 0 zone resets 00:10:43.336 slat (usec): min=14, max=3482, avg=60.76, stdev=146.93 00:10:43.336 clat (usec): min=2589, max=14163, avg=6753.85, stdev=1281.98 00:10:43.336 lat (usec): min=2634, max=14193, avg=6814.61, stdev=1286.66 00:10:43.336 clat percentiles (usec): 00:10:43.336 | 1.00th=[ 3195], 5.00th=[ 3982], 10.00th=[ 4883], 20.00th=[ 6194], 00:10:43.336 | 30.00th=[ 6521], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 7111], 00:10:43.336 | 70.00th=[ 7308], 80.00th=[ 7504], 90.00th=[ 7767], 95.00th=[ 8094], 00:10:43.336 | 99.00th=[10814], 99.50th=[11338], 99.90th=[12387], 99.95th=[12649], 00:10:43.336 | 99.99th=[13435] 00:10:43.336 bw ( KiB/s): min=14056, max=28128, per=91.01%, avg=23369.33, stdev=4620.84, samples=12 00:10:43.336 iops : min= 3514, max= 7032, avg=5842.33, stdev=1155.21, samples=12 00:10:43.336 lat (msec) : 2=0.01%, 4=2.31%, 10=91.69%, 20=5.99% 00:10:43.336 cpu : usr=5.56%, sys=23.01%, ctx=5920, majf=0, minf=84 00:10:43.336 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:43.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:43.336 issued rwts: total=67674,35126,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.336 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:43.336 00:10:43.336 Run status group 0 (all jobs): 00:10:43.336 READ: bw=44.0MiB/s (46.2MB/s), 44.0MiB/s-44.0MiB/s (46.2MB/s-46.2MB/s), io=264MiB (277MB), run=6005-6005msec 00:10:43.336 WRITE: bw=25.1MiB/s (26.3MB/s), 25.1MiB/s-25.1MiB/s (26.3MB/s-26.3MB/s), io=137MiB (144MB), run=5472-5472msec 00:10:43.336 00:10:43.336 Disk stats (read/write): 00:10:43.336 nvme0n1: ios=66748/34460, merge=0/0, 
ticks=492970/217112, in_queue=710082, util=98.53% 00:10:43.336 10:15:56 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:10:43.595 10:15:56 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:43.854 10:15:57 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:43.854 10:15:57 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:43.854 10:15:57 -- target/multipath.sh@22 -- # local timeout=20 00:10:43.854 10:15:57 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:43.854 10:15:57 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:43.854 10:15:57 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:43.854 10:15:57 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:43.854 10:15:57 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:43.854 10:15:57 -- target/multipath.sh@22 -- # local timeout=20 00:10:43.854 10:15:57 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:43.854 10:15:57 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:43.854 10:15:57 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:43.854 10:15:57 -- target/multipath.sh@113 -- # echo round-robin 00:10:43.854 10:15:57 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:43.854 10:15:57 -- target/multipath.sh@116 -- # fio_pid=74058 00:10:43.854 10:15:57 -- target/multipath.sh@118 -- # sleep 1 00:10:43.854 [global] 00:10:43.854 thread=1 00:10:43.854 invalidate=1 00:10:43.854 rw=randrw 00:10:43.854 time_based=1 00:10:43.854 runtime=6 00:10:43.854 ioengine=libaio 00:10:43.854 direct=1 00:10:43.854 bs=4096 00:10:43.854 iodepth=128 00:10:43.854 norandommap=0 00:10:43.854 numjobs=1 00:10:43.854 00:10:43.854 verify_dump=1 00:10:43.854 verify_backlog=512 00:10:43.854 verify_state_save=0 00:10:43.854 do_verify=1 00:10:43.854 verify=crc32c-intel 00:10:43.854 [job0] 00:10:43.854 filename=/dev/nvme0n1 00:10:43.854 Could not set queue depth (nvme0n1) 00:10:44.113 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.113 fio-3.35 00:10:44.113 Starting 1 thread 00:10:45.049 10:15:58 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:45.049 10:15:58 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:45.308 10:15:58 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:45.308 10:15:58 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:45.308 10:15:58 -- target/multipath.sh@22 -- # local timeout=20 00:10:45.308 10:15:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:45.308 10:15:58 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:45.308 10:15:58 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:45.308 10:15:58 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:45.308 10:15:58 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:45.308 10:15:58 -- target/multipath.sh@22 -- # local timeout=20 00:10:45.308 10:15:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:45.308 10:15:58 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:45.308 10:15:58 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:45.308 10:15:58 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:45.568 10:15:58 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:45.826 10:15:59 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:45.826 10:15:59 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:45.826 10:15:59 -- target/multipath.sh@22 -- # local timeout=20 00:10:45.826 10:15:59 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:45.826 10:15:59 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:45.826 10:15:59 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:45.826 10:15:59 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:45.826 10:15:59 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:45.826 10:15:59 -- target/multipath.sh@22 -- # local timeout=20 00:10:45.826 10:15:59 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:45.826 10:15:59 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:45.826 10:15:59 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:45.826 10:15:59 -- target/multipath.sh@132 -- # wait 74058 00:10:51.098 00:10:51.098 job0: (groupid=0, jobs=1): err= 0: pid=74083: Fri Jul 26 10:16:03 2024 00:10:51.098 read: IOPS=12.5k, BW=48.8MiB/s (51.2MB/s)(293MiB/6006msec) 00:10:51.098 slat (usec): min=4, max=7429, avg=40.66, stdev=194.49 00:10:51.098 clat (usec): min=269, max=15346, avg=7121.72, stdev=1832.20 00:10:51.098 lat (usec): min=280, max=15382, avg=7162.38, stdev=1846.28 00:10:51.098 clat percentiles (usec): 00:10:51.098 | 1.00th=[ 2933], 5.00th=[ 4015], 10.00th=[ 4686], 20.00th=[ 5538], 00:10:51.098 | 30.00th=[ 6456], 40.00th=[ 6980], 50.00th=[ 7308], 60.00th=[ 7570], 00:10:51.098 | 70.00th=[ 7898], 80.00th=[ 8225], 90.00th=[ 8848], 95.00th=[10814], 00:10:51.098 | 99.00th=[12125], 99.50th=[12387], 99.90th=[12911], 99.95th=[13435], 00:10:51.098 | 99.99th=[13960] 00:10:51.098 bw ( KiB/s): min=13128, max=39744, per=50.25%, avg=25121.33, stdev=7313.39, samples=12 00:10:51.098 iops : min= 3282, max= 9936, avg=6280.33, stdev=1828.35, samples=12 00:10:51.098 write: IOPS=6956, BW=27.2MiB/s (28.5MB/s)(147MiB/5419msec); 0 zone resets 00:10:51.098 slat (usec): min=12, max=3257, avg=51.42, stdev=128.48 00:10:51.098 clat (usec): min=1347, max=13359, avg=6042.85, stdev=1707.41 00:10:51.098 lat (usec): min=1369, max=13383, avg=6094.27, stdev=1721.39 00:10:51.098 clat percentiles (usec): 00:10:51.098 | 1.00th=[ 2573], 5.00th=[ 3163], 10.00th=[ 3523], 20.00th=[ 4178], 00:10:51.098 | 30.00th=[ 4883], 40.00th=[ 6128], 50.00th=[ 6587], 60.00th=[ 6915], 00:10:51.098 | 70.00th=[ 7111], 80.00th=[ 7373], 90.00th=[ 7701], 95.00th=[ 8029], 00:10:51.098 | 99.00th=[10421], 99.50th=[11076], 99.90th=[12125], 99.95th=[12518], 00:10:51.098 | 99.99th=[13304] 00:10:51.098 bw ( KiB/s): min=13472, max=38904, per=90.15%, avg=25085.33, stdev=7119.95, samples=12 00:10:51.098 iops : min= 3368, max= 9726, avg=6271.33, stdev=1779.99, samples=12 00:10:51.098 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:10:51.098 lat (msec) : 2=0.18%, 4=8.71%, 10=86.26%, 20=4.82% 00:10:51.098 cpu : usr=5.79%, sys=24.46%, ctx=6078, majf=0, minf=60 00:10:51.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:51.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:51.098 issued rwts: total=75061,37698,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.098 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:51.098 00:10:51.098 Run status group 0 (all jobs): 00:10:51.098 READ: bw=48.8MiB/s (51.2MB/s), 48.8MiB/s-48.8MiB/s (51.2MB/s-51.2MB/s), io=293MiB (307MB), run=6006-6006msec 00:10:51.098 WRITE: bw=27.2MiB/s (28.5MB/s), 27.2MiB/s-27.2MiB/s (28.5MB/s-28.5MB/s), io=147MiB (154MB), run=5419-5419msec 00:10:51.098 00:10:51.098 Disk stats (read/write): 00:10:51.098 nvme0n1: ios=74234/37027, merge=0/0, ticks=502188/206954, in_queue=709142, util=98.66% 00:10:51.098 10:16:03 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:51.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:51.098 10:16:03 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:51.098 10:16:03 -- common/autotest_common.sh@1198 -- # local i=0 00:10:51.098 10:16:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:10:51.098 10:16:03 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.098 10:16:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:51.098 10:16:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.098 10:16:03 -- common/autotest_common.sh@1210 -- # return 0 00:10:51.098 10:16:03 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:51.098 10:16:03 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:51.098 10:16:03 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:51.098 10:16:03 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:51.098 10:16:03 -- target/multipath.sh@144 -- # nvmftestfini 00:10:51.098 10:16:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:51.098 10:16:03 -- nvmf/common.sh@116 -- # sync 00:10:51.098 10:16:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:51.098 10:16:03 -- nvmf/common.sh@119 -- # set +e 00:10:51.098 10:16:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:51.098 10:16:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:51.098 rmmod nvme_tcp 00:10:51.098 rmmod nvme_fabrics 00:10:51.098 rmmod nvme_keyring 00:10:51.098 10:16:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:51.098 10:16:04 -- nvmf/common.sh@123 -- # set -e 00:10:51.098 10:16:04 -- nvmf/common.sh@124 -- # return 0 00:10:51.098 10:16:04 -- nvmf/common.sh@477 -- # '[' -n 73871 ']' 00:10:51.098 10:16:04 -- nvmf/common.sh@478 -- # killprocess 73871 00:10:51.098 10:16:04 -- common/autotest_common.sh@926 -- # '[' -z 73871 ']' 00:10:51.098 10:16:04 -- common/autotest_common.sh@930 -- # kill -0 73871 00:10:51.098 10:16:04 -- common/autotest_common.sh@931 -- # uname 00:10:51.098 10:16:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:51.098 10:16:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73871 00:10:51.098 killing process with pid 73871 00:10:51.098 10:16:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:51.098 10:16:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:51.098 10:16:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73871' 00:10:51.098 10:16:04 -- common/autotest_common.sh@945 -- # kill 73871 00:10:51.098 10:16:04 -- common/autotest_common.sh@950 -- # wait 73871 00:10:51.098 10:16:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:51.098 10:16:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:51.098 10:16:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:51.098 10:16:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:51.098 10:16:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:51.098 10:16:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.098 10:16:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:51.098 10:16:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.098 10:16:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:51.098 00:10:51.098 real 0m19.186s 00:10:51.098 user 1m12.259s 00:10:51.098 sys 0m9.508s 00:10:51.098 10:16:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:51.098 10:16:04 -- common/autotest_common.sh@10 -- # set +x 00:10:51.098 ************************************ 00:10:51.098 END TEST nvmf_multipath 00:10:51.098 ************************************ 00:10:51.098 10:16:04 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:51.098 10:16:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:51.098 10:16:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:51.098 10:16:04 -- common/autotest_common.sh@10 -- # set +x 00:10:51.098 ************************************ 00:10:51.098 START TEST nvmf_zcopy 00:10:51.098 ************************************ 00:10:51.098 10:16:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:51.098 * Looking for test storage... 00:10:51.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:51.098 10:16:04 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:51.098 10:16:04 -- nvmf/common.sh@7 -- # uname -s 00:10:51.098 10:16:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:51.098 10:16:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:51.098 10:16:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:51.099 10:16:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:51.099 10:16:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:51.099 10:16:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:51.099 10:16:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:51.099 10:16:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:51.099 10:16:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:51.099 10:16:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:51.099 10:16:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:10:51.099 10:16:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:10:51.099 10:16:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:51.099 10:16:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:51.099 10:16:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:51.099 10:16:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:51.099 10:16:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.099 10:16:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.099 10:16:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.099 10:16:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.099 10:16:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.099 
10:16:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.099 10:16:04 -- paths/export.sh@5 -- # export PATH 00:10:51.099 10:16:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.099 10:16:04 -- nvmf/common.sh@46 -- # : 0 00:10:51.099 10:16:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:51.099 10:16:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:51.099 10:16:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:51.099 10:16:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:51.099 10:16:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:51.099 10:16:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:51.099 10:16:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:51.099 10:16:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:51.099 10:16:04 -- target/zcopy.sh@12 -- # nvmftestinit 00:10:51.099 10:16:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:51.099 10:16:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:51.099 10:16:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:51.099 10:16:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:51.099 10:16:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:51.099 10:16:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.099 10:16:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:51.099 10:16:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.099 10:16:04 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:51.099 10:16:04 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:51.099 10:16:04 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:51.099 10:16:04 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:51.099 10:16:04 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:51.099 10:16:04 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:51.099 10:16:04 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.099 10:16:04 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.099 10:16:04 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:51.099 10:16:04 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:51.099 10:16:04 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:51.099 10:16:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:51.099 10:16:04 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:51.099 10:16:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.099 10:16:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:51.099 10:16:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:51.099 10:16:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:51.099 10:16:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:51.099 10:16:04 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:51.099 10:16:04 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:51.099 Cannot find device "nvmf_tgt_br" 00:10:51.099 10:16:04 -- nvmf/common.sh@154 -- # true 00:10:51.099 10:16:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:51.099 Cannot find device "nvmf_tgt_br2" 00:10:51.099 10:16:04 -- nvmf/common.sh@155 -- # true 00:10:51.099 10:16:04 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:51.099 10:16:04 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:51.099 Cannot find device "nvmf_tgt_br" 00:10:51.099 10:16:04 -- nvmf/common.sh@157 -- # true 00:10:51.099 10:16:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:51.099 Cannot find device "nvmf_tgt_br2" 00:10:51.358 10:16:04 -- nvmf/common.sh@158 -- # true 00:10:51.358 10:16:04 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:51.358 10:16:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:51.358 10:16:04 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:51.358 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:51.358 10:16:04 -- nvmf/common.sh@161 -- # true 00:10:51.358 10:16:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:51.358 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:51.358 10:16:04 -- nvmf/common.sh@162 -- # true 00:10:51.358 10:16:04 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:51.358 10:16:04 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:51.358 10:16:04 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:51.358 10:16:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:51.358 10:16:04 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:51.358 10:16:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:51.358 10:16:04 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:51.358 10:16:04 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:51.358 10:16:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:51.358 10:16:04 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:51.358 10:16:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:51.358 10:16:04 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:51.358 10:16:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:51.358 10:16:04 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:51.358 10:16:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:51.358 10:16:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:51.358 10:16:04 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:51.358 
10:16:04 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:51.358 10:16:04 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:51.358 10:16:04 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:51.358 10:16:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:51.358 10:16:04 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:51.358 10:16:04 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:51.358 10:16:04 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:51.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:10:51.358 00:10:51.358 --- 10.0.0.2 ping statistics --- 00:10:51.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.358 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:10:51.358 10:16:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:51.358 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:51.358 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:10:51.358 00:10:51.358 --- 10.0.0.3 ping statistics --- 00:10:51.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.358 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:10:51.358 10:16:04 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:51.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:51.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:51.358 00:10:51.358 --- 10.0.0.1 ping statistics --- 00:10:51.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.358 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:51.358 10:16:04 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.358 10:16:04 -- nvmf/common.sh@421 -- # return 0 00:10:51.358 10:16:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:51.358 10:16:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.358 10:16:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:51.358 10:16:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:51.358 10:16:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.358 10:16:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:51.358 10:16:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:51.616 10:16:04 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:51.616 10:16:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:51.616 10:16:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:51.616 10:16:04 -- common/autotest_common.sh@10 -- # set +x 00:10:51.616 10:16:04 -- nvmf/common.sh@469 -- # nvmfpid=74335 00:10:51.616 10:16:04 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:51.616 10:16:04 -- nvmf/common.sh@470 -- # waitforlisten 74335 00:10:51.616 10:16:04 -- common/autotest_common.sh@819 -- # '[' -z 74335 ']' 00:10:51.616 10:16:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.616 10:16:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:51.616 10:16:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
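At this point the zcopy target has been launched inside the namespace with a single-core mask (-m 0x2), and the harness blocks until the RPC socket at /var/tmp/spdk.sock answers. A rough standalone equivalent of that wait, an approximation of what waitforlisten does rather than the actual helper, would be:

    # Poll until nvmf_tgt is up and serving RPCs on its UNIX socket (sketch only):
    sock=/var/tmp/spdk.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        if [ -S "$sock" ] && "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            break                      # socket exists and the target responds
        fi
        sleep 0.1
    done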
00:10:51.616 10:16:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:51.616 10:16:04 -- common/autotest_common.sh@10 -- # set +x 00:10:51.616 [2024-07-26 10:16:04.878223] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:51.616 [2024-07-26 10:16:04.878318] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.616 [2024-07-26 10:16:05.014929] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.874 [2024-07-26 10:16:05.097909] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:51.874 [2024-07-26 10:16:05.098071] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.874 [2024-07-26 10:16:05.098084] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.874 [2024-07-26 10:16:05.098092] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.874 [2024-07-26 10:16:05.098126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.479 10:16:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:52.479 10:16:05 -- common/autotest_common.sh@852 -- # return 0 00:10:52.479 10:16:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:52.479 10:16:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:52.479 10:16:05 -- common/autotest_common.sh@10 -- # set +x 00:10:52.479 10:16:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.479 10:16:05 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:52.479 10:16:05 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:52.479 10:16:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:52.479 10:16:05 -- common/autotest_common.sh@10 -- # set +x 00:10:52.479 [2024-07-26 10:16:05.870516] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.479 10:16:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:52.479 10:16:05 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:52.479 10:16:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:52.479 10:16:05 -- common/autotest_common.sh@10 -- # set +x 00:10:52.479 10:16:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:52.479 10:16:05 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.479 10:16:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:52.479 10:16:05 -- common/autotest_common.sh@10 -- # set +x 00:10:52.479 [2024-07-26 10:16:05.886659] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.479 10:16:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:52.479 10:16:05 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:52.479 10:16:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:52.479 10:16:05 -- common/autotest_common.sh@10 -- # set +x 00:10:52.479 10:16:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:52.479 10:16:05 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 
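Stripped of the rpc_cmd/xtrace wrapping, the zcopy target configured above (together with the namespace attach that follows immediately below) boils down to a handful of rpc.py calls. A condensed sketch, with arguments copied verbatim from the log; the comments on flag meanings are annotations, not harness output:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy                                    # TCP transport with zero-copy enabled; -o and -c 0 as in the test
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 # allow any host, serial number, up to 10 namespaces
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420                # discovery service on the same address/port
    $RPC bdev_malloc_create 32 4096 -b malloc0                                           # 32 MiB RAM-backed bdev, 4096-byte blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1                   # expose it as namespace 1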
00:10:52.479 10:16:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:52.479 10:16:05 -- common/autotest_common.sh@10 -- # set +x 00:10:52.479 malloc0 00:10:52.479 10:16:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:52.479 10:16:05 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:52.479 10:16:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:52.479 10:16:05 -- common/autotest_common.sh@10 -- # set +x 00:10:52.479 10:16:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:52.479 10:16:05 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:52.479 10:16:05 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:52.479 10:16:05 -- nvmf/common.sh@520 -- # config=() 00:10:52.479 10:16:05 -- nvmf/common.sh@520 -- # local subsystem config 00:10:52.479 10:16:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:52.479 10:16:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:52.479 { 00:10:52.479 "params": { 00:10:52.479 "name": "Nvme$subsystem", 00:10:52.479 "trtype": "$TEST_TRANSPORT", 00:10:52.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:52.479 "adrfam": "ipv4", 00:10:52.479 "trsvcid": "$NVMF_PORT", 00:10:52.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:52.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:52.479 "hdgst": ${hdgst:-false}, 00:10:52.479 "ddgst": ${ddgst:-false} 00:10:52.479 }, 00:10:52.479 "method": "bdev_nvme_attach_controller" 00:10:52.479 } 00:10:52.479 EOF 00:10:52.479 )") 00:10:52.479 10:16:05 -- nvmf/common.sh@542 -- # cat 00:10:52.737 10:16:05 -- nvmf/common.sh@544 -- # jq . 00:10:52.737 10:16:05 -- nvmf/common.sh@545 -- # IFS=, 00:10:52.737 10:16:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:52.737 "params": { 00:10:52.737 "name": "Nvme1", 00:10:52.737 "trtype": "tcp", 00:10:52.737 "traddr": "10.0.0.2", 00:10:52.737 "adrfam": "ipv4", 00:10:52.737 "trsvcid": "4420", 00:10:52.737 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:52.737 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:52.737 "hdgst": false, 00:10:52.737 "ddgst": false 00:10:52.737 }, 00:10:52.737 "method": "bdev_nvme_attach_controller" 00:10:52.737 }' 00:10:52.737 [2024-07-26 10:16:05.973443] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:52.737 [2024-07-26 10:16:05.973564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74368 ] 00:10:52.737 [2024-07-26 10:16:06.113688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.995 [2024-07-26 10:16:06.204826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.995 Running I/O for 10 seconds... 
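target/zcopy.sh then configures the target over RPC and drives a verify workload from bdevperf. rpc_cmd is the test-framework wrapper around scripts/rpk… rather, scripts/rpc.py talking to /var/tmp/spdk.sock, so the sequence above is roughly equivalent to the sketch below; gen_nvmf_target_json is the helper whose output (the Nvme1 attach stanza printed above) is handed to bdevperf through a process-substitution fd such as /dev/fd/62.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # default socket: /var/tmp/spdk.sock

$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy               # TCP transport with zero-copy enabled
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                             # allow any host, max 10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                      # 32 MiB RAM bdev, 4096-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# 10-second verify run, queue depth 128, 8 KiB I/O, config fed in via /dev/fd
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192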
00:11:03.000 00:11:03.000 Latency(us) 00:11:03.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:03.000 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:03.000 Verification LBA range: start 0x0 length 0x1000 00:11:03.000 Nvme1n1 : 10.01 9278.39 72.49 0.00 0.00 13760.18 1571.37 22043.93 00:11:03.000 =================================================================================================================== 00:11:03.000 Total : 9278.39 72.49 0.00 0.00 13760.18 1571.37 22043.93 00:11:03.258 10:16:16 -- target/zcopy.sh@39 -- # perfpid=74480 00:11:03.258 10:16:16 -- target/zcopy.sh@41 -- # xtrace_disable 00:11:03.258 10:16:16 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:03.258 10:16:16 -- nvmf/common.sh@520 -- # config=() 00:11:03.258 10:16:16 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:03.258 10:16:16 -- nvmf/common.sh@520 -- # local subsystem config 00:11:03.258 10:16:16 -- common/autotest_common.sh@10 -- # set +x 00:11:03.258 10:16:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:03.258 10:16:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:03.258 { 00:11:03.258 "params": { 00:11:03.258 "name": "Nvme$subsystem", 00:11:03.258 "trtype": "$TEST_TRANSPORT", 00:11:03.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:03.259 "adrfam": "ipv4", 00:11:03.259 "trsvcid": "$NVMF_PORT", 00:11:03.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:03.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:03.259 "hdgst": ${hdgst:-false}, 00:11:03.259 "ddgst": ${ddgst:-false} 00:11:03.259 }, 00:11:03.259 "method": "bdev_nvme_attach_controller" 00:11:03.259 } 00:11:03.259 EOF 00:11:03.259 )") 00:11:03.259 10:16:16 -- nvmf/common.sh@542 -- # cat 00:11:03.259 [2024-07-26 10:16:16.606200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-07-26 10:16:16.606258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 10:16:16 -- nvmf/common.sh@544 -- # jq . 
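After the verify pass completes (the summary table above), zcopy.sh starts a second bdevperf job in the background (perfpid=74480) with a 50/50 random read/write mix and keeps hitting the target with RPCs while that I/O is in flight. The exact loop body is not visible in this excerpt; the sketch below is one reading that is consistent with the long run of "Requested NSID 1 already in use" / "Unable to add namespace" messages that follows, where each nvmf_subsystem_add_ns call pauses the subsystem, fails because NSID 1 is still attached, and resumes it.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# 5-second 50/50 randrw job in the background, 8 KiB I/O, queue depth 128
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!

# Repeatedly issue namespace-add requests while I/O runs; every call is
# expected to fail (NSID 1 is already in use), producing the errors logged
# below.  The loop shape here is an assumption, not taken from zcopy.sh.
while kill -0 "$perfpid" 2>/dev/null; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done
wait "$perfpid"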
00:11:03.259 10:16:16 -- nvmf/common.sh@545 -- # IFS=, 00:11:03.259 10:16:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:03.259 "params": { 00:11:03.259 "name": "Nvme1", 00:11:03.259 "trtype": "tcp", 00:11:03.259 "traddr": "10.0.0.2", 00:11:03.259 "adrfam": "ipv4", 00:11:03.259 "trsvcid": "4420", 00:11:03.259 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:03.259 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:03.259 "hdgst": false, 00:11:03.259 "ddgst": false 00:11:03.259 }, 00:11:03.259 "method": "bdev_nvme_attach_controller" 00:11:03.259 }' 00:11:03.259 [2024-07-26 10:16:16.618150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-07-26 10:16:16.618176] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-07-26 10:16:16.626150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-07-26 10:16:16.626177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-07-26 10:16:16.634152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-07-26 10:16:16.634178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-07-26 10:16:16.637161] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:11:03.259 [2024-07-26 10:16:16.637227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74480 ] 00:11:03.259 [2024-07-26 10:16:16.642153] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-07-26 10:16:16.642177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-07-26 10:16:16.650156] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-07-26 10:16:16.650182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-07-26 10:16:16.658157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-07-26 10:16:16.658183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-07-26 10:16:16.666152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-07-26 10:16:16.666192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-07-26 10:16:16.674164] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-07-26 10:16:16.674190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-07-26 10:16:16.682173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-07-26 10:16:16.682200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-07-26 10:16:16.690169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-07-26 10:16:16.690195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-07-26 10:16:16.698170] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-07-26 
10:16:16.698196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-07-26 10:16:16.706174] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-07-26 10:16:16.706199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-07-26 10:16:16.714174] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-07-26 10:16:16.714199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.517 [2024-07-26 10:16:16.722174] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.517 [2024-07-26 10:16:16.722200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.517 [2024-07-26 10:16:16.730178] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.517 [2024-07-26 10:16:16.730203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.517 [2024-07-26 10:16:16.738182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.517 [2024-07-26 10:16:16.738208] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.517 [2024-07-26 10:16:16.746187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.517 [2024-07-26 10:16:16.746209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.517 [2024-07-26 10:16:16.754189] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.517 [2024-07-26 10:16:16.754213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.517 [2024-07-26 10:16:16.762191] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.517 [2024-07-26 10:16:16.762215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.770199] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.770223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.770446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.518 [2024-07-26 10:16:16.778211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.778241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.786204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.786233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.794198] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.794223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.802202] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.802227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.810207] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.810234] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.818225] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.818250] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.826220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.826265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.834212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.834237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.842236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.842263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.850220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.850264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.858239] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.858287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.861034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.518 [2024-07-26 10:16:16.866223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.866249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.874234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.874262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.882251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.882280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.890244] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.890275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.898243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.898271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.906257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.906284] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.914237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.914264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.922238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.922266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.930238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.930262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.942266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.942316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.950258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.950283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.958259] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.958284] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.518 [2024-07-26 10:16:16.966284] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.518 [2024-07-26 10:16:16.966331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.776 [2024-07-26 10:16:16.974282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.776 [2024-07-26 10:16:16.974310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.776 [2024-07-26 10:16:16.982288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.776 [2024-07-26 10:16:16.982333] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.776 [2024-07-26 10:16:16.990297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.776 [2024-07-26 10:16:16.990325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.776 [2024-07-26 10:16:16.998302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.776 [2024-07-26 10:16:16.998330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.776 [2024-07-26 10:16:17.006311] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.776 [2024-07-26 10:16:17.006338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.776 [2024-07-26 10:16:17.014317] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.776 [2024-07-26 10:16:17.014341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.776 [2024-07-26 10:16:17.022539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.776 [2024-07-26 10:16:17.022586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.776 [2024-07-26 10:16:17.030453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.776 [2024-07-26 10:16:17.030481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.776 Running I/O for 5 seconds... 
00:11:03.776 [2024-07-26 10:16:17.038505] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.776 [2024-07-26 10:16:17.038531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.776 [2024-07-26 10:16:17.052886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.776 [2024-07-26 10:16:17.052936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.776 [2024-07-26 10:16:17.063647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.776 [2024-07-26 10:16:17.063693] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.776 [2024-07-26 10:16:17.077058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.776 [2024-07-26 10:16:17.077106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.776 [2024-07-26 10:16:17.086111] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.776 [2024-07-26 10:16:17.086142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.776 [2024-07-26 10:16:17.097288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.776 [2024-07-26 10:16:17.097321] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.776 [2024-07-26 10:16:17.109854] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.776 [2024-07-26 10:16:17.109902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.776 [2024-07-26 10:16:17.119222] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.776 [2024-07-26 10:16:17.119255] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.776 [2024-07-26 10:16:17.136460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.776 [2024-07-26 10:16:17.136507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.776 [2024-07-26 10:16:17.152696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.776 [2024-07-26 10:16:17.152742] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.776 [2024-07-26 10:16:17.161785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.776 [2024-07-26 10:16:17.161831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.776 [2024-07-26 10:16:17.176574] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.776 [2024-07-26 10:16:17.176653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.776 [2024-07-26 10:16:17.185486] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.776 [2024-07-26 10:16:17.185532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.776 [2024-07-26 10:16:17.201273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.776 [2024-07-26 10:16:17.201305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.776 [2024-07-26 10:16:17.210206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.776 
[2024-07-26 10:16:17.210253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.777 [2024-07-26 10:16:17.224351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.777 [2024-07-26 10:16:17.224398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.035 [2024-07-26 10:16:17.233941] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.035 [2024-07-26 10:16:17.233987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.035 [2024-07-26 10:16:17.244599] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.035 [2024-07-26 10:16:17.244658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.035 [2024-07-26 10:16:17.256746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.035 [2024-07-26 10:16:17.256792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.035 [2024-07-26 10:16:17.266077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.035 [2024-07-26 10:16:17.266108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.035 [2024-07-26 10:16:17.279257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.035 [2024-07-26 10:16:17.279291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.035 [2024-07-26 10:16:17.294723] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.035 [2024-07-26 10:16:17.294769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.035 [2024-07-26 10:16:17.312257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.035 [2024-07-26 10:16:17.312289] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.035 [2024-07-26 10:16:17.326576] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.035 [2024-07-26 10:16:17.326649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.035 [2024-07-26 10:16:17.335352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.035 [2024-07-26 10:16:17.335398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.035 [2024-07-26 10:16:17.346694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.035 [2024-07-26 10:16:17.346741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.035 [2024-07-26 10:16:17.358017] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.035 [2024-07-26 10:16:17.358063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.035 [2024-07-26 10:16:17.366799] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.035 [2024-07-26 10:16:17.366845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.035 [2024-07-26 10:16:17.379279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.035 [2024-07-26 10:16:17.379326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.035 [2024-07-26 10:16:17.389127] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.035 [2024-07-26 10:16:17.389173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.035 [2024-07-26 10:16:17.399483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.035 [2024-07-26 10:16:17.399514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.035 [2024-07-26 10:16:17.412770] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.035 [2024-07-26 10:16:17.412817] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.035 [2024-07-26 10:16:17.422771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.035 [2024-07-26 10:16:17.422819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.035 [2024-07-26 10:16:17.433650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.035 [2024-07-26 10:16:17.433696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.035 [2024-07-26 10:16:17.446025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.035 [2024-07-26 10:16:17.446071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.035 [2024-07-26 10:16:17.464076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.035 [2024-07-26 10:16:17.464124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.035 [2024-07-26 10:16:17.479615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.035 [2024-07-26 10:16:17.479673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.035 [2024-07-26 10:16:17.488947] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.035 [2024-07-26 10:16:17.488994] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.294 [2024-07-26 10:16:17.501193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.294 [2024-07-26 10:16:17.501240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.294 [2024-07-26 10:16:17.510913] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.294 [2024-07-26 10:16:17.510975] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.294 [2024-07-26 10:16:17.521493] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.294 [2024-07-26 10:16:17.521539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.294 [2024-07-26 10:16:17.533362] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.294 [2024-07-26 10:16:17.533409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.294 [2024-07-26 10:16:17.542454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.294 [2024-07-26 10:16:17.542500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.294 [2024-07-26 10:16:17.556629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.294 [2024-07-26 10:16:17.556686] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.294 [2024-07-26 10:16:17.565598] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.294 [2024-07-26 10:16:17.565655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.294 [2024-07-26 10:16:17.577877] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.294 [2024-07-26 10:16:17.577924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.294 [2024-07-26 10:16:17.587182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.294 [2024-07-26 10:16:17.587228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.294 [2024-07-26 10:16:17.601912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.294 [2024-07-26 10:16:17.601976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.294 [2024-07-26 10:16:17.611297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.294 [2024-07-26 10:16:17.611343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.294 [2024-07-26 10:16:17.627156] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.294 [2024-07-26 10:16:17.627203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.294 [2024-07-26 10:16:17.636722] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.294 [2024-07-26 10:16:17.636768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.294 [2024-07-26 10:16:17.650045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.294 [2024-07-26 10:16:17.650110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.294 [2024-07-26 10:16:17.660734] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.294 [2024-07-26 10:16:17.660782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.294 [2024-07-26 10:16:17.671439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.294 [2024-07-26 10:16:17.671502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.294 [2024-07-26 10:16:17.682225] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.294 [2024-07-26 10:16:17.682258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.294 [2024-07-26 10:16:17.699736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.294 [2024-07-26 10:16:17.699782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.294 [2024-07-26 10:16:17.716087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.294 [2024-07-26 10:16:17.716135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.294 [2024-07-26 10:16:17.725541] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.294 [2024-07-26 10:16:17.725614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.294 [2024-07-26 10:16:17.736414] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.294 [2024-07-26 10:16:17.736460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.294 [2024-07-26 10:16:17.748079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.294 [2024-07-26 10:16:17.748111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.553 [2024-07-26 10:16:17.757019] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.553 [2024-07-26 10:16:17.757065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.553 [2024-07-26 10:16:17.769418] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.553 [2024-07-26 10:16:17.769465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.553 [2024-07-26 10:16:17.779346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.553 [2024-07-26 10:16:17.779391] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.553 [2024-07-26 10:16:17.793746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.553 [2024-07-26 10:16:17.793791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.553 [2024-07-26 10:16:17.802221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.553 [2024-07-26 10:16:17.802269] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.553 [2024-07-26 10:16:17.812150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.553 [2024-07-26 10:16:17.812182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.553 [2024-07-26 10:16:17.822111] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.553 [2024-07-26 10:16:17.822158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.553 [2024-07-26 10:16:17.832256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.553 [2024-07-26 10:16:17.832318] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.553 [2024-07-26 10:16:17.842706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.553 [2024-07-26 10:16:17.842753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.553 [2024-07-26 10:16:17.853543] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.553 [2024-07-26 10:16:17.853616] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.553 [2024-07-26 10:16:17.864200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.553 [2024-07-26 10:16:17.864233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.553 [2024-07-26 10:16:17.876631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.553 [2024-07-26 10:16:17.876663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.553 [2024-07-26 10:16:17.886209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.553 [2024-07-26 10:16:17.886243] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.553 [2024-07-26 10:16:17.899427] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.553 [2024-07-26 10:16:17.899460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.553 [2024-07-26 10:16:17.909441] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.553 [2024-07-26 10:16:17.909475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.553 [2024-07-26 10:16:17.919965] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.553 [2024-07-26 10:16:17.919999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.553 [2024-07-26 10:16:17.932229] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.553 [2024-07-26 10:16:17.932263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.553 [2024-07-26 10:16:17.941357] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.553 [2024-07-26 10:16:17.941390] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.553 [2024-07-26 10:16:17.957213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.553 [2024-07-26 10:16:17.957245] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.553 [2024-07-26 10:16:17.966551] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.553 [2024-07-26 10:16:17.966614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.553 [2024-07-26 10:16:17.980006] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.553 [2024-07-26 10:16:17.980038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.553 [2024-07-26 10:16:17.994007] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.553 [2024-07-26 10:16:17.994055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.009803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.009851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.018848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.018881] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.030404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.030467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.041050] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.041082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.051320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.051369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.061834] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.061867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.074658] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.074719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.091190] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.091227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.107846] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.107893] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.117701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.117732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.128442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.128488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.146092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.146147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.155438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.155484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.169602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.169659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.178703] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.178749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.195364] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.195429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.214167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.214230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.224259] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.224326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.235080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.235121] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.247344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.247385] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.256981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.257030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.273016] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.273076] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.290904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.290966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.848 [2024-07-26 10:16:18.301863] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.848 [2024-07-26 10:16:18.301910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.110 [2024-07-26 10:16:18.314548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.110 [2024-07-26 10:16:18.314620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.110 [2024-07-26 10:16:18.324017] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.110 [2024-07-26 10:16:18.324049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.110 [2024-07-26 10:16:18.340144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.110 [2024-07-26 10:16:18.340175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.110 [2024-07-26 10:16:18.351264] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.110 [2024-07-26 10:16:18.351310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.110 [2024-07-26 10:16:18.367297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.110 [2024-07-26 10:16:18.367345] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.110 [2024-07-26 10:16:18.383785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.110 [2024-07-26 10:16:18.383832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.110 [2024-07-26 10:16:18.401756] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.110 [2024-07-26 10:16:18.401802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.110 [2024-07-26 10:16:18.411433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.110 [2024-07-26 10:16:18.411477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.110 [2024-07-26 10:16:18.421373] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.110 [2024-07-26 10:16:18.421420] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.110 [2024-07-26 10:16:18.431614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.110 [2024-07-26 10:16:18.431659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.110 [2024-07-26 10:16:18.441429] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.110 [2024-07-26 10:16:18.441474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.110 [2024-07-26 10:16:18.451121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.110 [2024-07-26 10:16:18.451152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.110 [2024-07-26 10:16:18.461272] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.110 [2024-07-26 10:16:18.461304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.110 [2024-07-26 10:16:18.471478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.110 [2024-07-26 10:16:18.471524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.110 [2024-07-26 10:16:18.481234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.110 [2024-07-26 10:16:18.481281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.110 [2024-07-26 10:16:18.496491] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.110 [2024-07-26 10:16:18.496537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.110 [2024-07-26 10:16:18.505248] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.110 [2024-07-26 10:16:18.505294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.110 [2024-07-26 10:16:18.517748] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.110 [2024-07-26 10:16:18.517794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.110 [2024-07-26 10:16:18.528833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.110 [2024-07-26 10:16:18.528880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.110 [2024-07-26 10:16:18.537431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.110 [2024-07-26 10:16:18.537477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.110 [2024-07-26 10:16:18.552584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.110 [2024-07-26 10:16:18.552663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.110 [2024-07-26 10:16:18.563720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.110 [2024-07-26 10:16:18.563767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.368 [2024-07-26 10:16:18.579710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.368 [2024-07-26 10:16:18.579761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.368 [2024-07-26 10:16:18.599086] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.368 [2024-07-26 10:16:18.599134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.368 [2024-07-26 10:16:18.609823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.368 [2024-07-26 10:16:18.609871] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.368 [2024-07-26 10:16:18.622193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.368 [2024-07-26 10:16:18.622241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.368 [2024-07-26 10:16:18.631895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.368 [2024-07-26 10:16:18.631949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.368 [2024-07-26 10:16:18.644863] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.368 [2024-07-26 10:16:18.644912] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.368 [2024-07-26 10:16:18.655112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.368 [2024-07-26 10:16:18.655144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.368 [2024-07-26 10:16:18.669115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.368 [2024-07-26 10:16:18.669149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.368 [2024-07-26 10:16:18.678320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.368 [2024-07-26 10:16:18.678353] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.368 [2024-07-26 10:16:18.693415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.368 [2024-07-26 10:16:18.693493] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.368 [2024-07-26 10:16:18.703079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.368 [2024-07-26 10:16:18.703112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.368 [2024-07-26 10:16:18.718160] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.368 [2024-07-26 10:16:18.718191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.368 [2024-07-26 10:16:18.727354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.368 [2024-07-26 10:16:18.727402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.368 [2024-07-26 10:16:18.738701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.368 [2024-07-26 10:16:18.738749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.368 [2024-07-26 10:16:18.751475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.368 [2024-07-26 10:16:18.751523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.368 [2024-07-26 10:16:18.761020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.368 [2024-07-26 10:16:18.761052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.368 [2024-07-26 10:16:18.771964] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.368 [2024-07-26 10:16:18.771998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.368 [2024-07-26 10:16:18.783867] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.368 [2024-07-26 10:16:18.783923] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.368 [2024-07-26 10:16:18.792362] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.368 [2024-07-26 10:16:18.792393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.368 [2024-07-26 10:16:18.803704] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.368 [2024-07-26 10:16:18.803735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.368 [2024-07-26 10:16:18.814186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.368 [2024-07-26 10:16:18.814219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.626 [2024-07-26 10:16:18.824426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.626 [2024-07-26 10:16:18.824475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.626 [2024-07-26 10:16:18.835133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.626 [2024-07-26 10:16:18.835166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.626 [2024-07-26 10:16:18.845769] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.626 [2024-07-26 10:16:18.845815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.626 [2024-07-26 10:16:18.857713] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.626 [2024-07-26 10:16:18.857759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.627 [2024-07-26 10:16:18.874448] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.627 [2024-07-26 10:16:18.874497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.627 [2024-07-26 10:16:18.889903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.627 [2024-07-26 10:16:18.889952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.627 [2024-07-26 10:16:18.899027] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.627 [2024-07-26 10:16:18.899075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.627 [2024-07-26 10:16:18.915532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.627 [2024-07-26 10:16:18.915578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.627 [2024-07-26 10:16:18.925345] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.627 [2024-07-26 10:16:18.925392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.627 [2024-07-26 10:16:18.939427] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.627 [2024-07-26 10:16:18.939473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.627 [2024-07-26 10:16:18.949002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.627 [2024-07-26 10:16:18.949048] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.627 [2024-07-26 10:16:18.960395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.627 [2024-07-26 10:16:18.960426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.627 [2024-07-26 10:16:18.971238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.627 [2024-07-26 10:16:18.971286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.627 [2024-07-26 10:16:18.982016] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.627 [2024-07-26 10:16:18.982049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.627 [2024-07-26 10:16:18.999653] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.627 [2024-07-26 10:16:18.999685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.627 [2024-07-26 10:16:19.015305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.627 [2024-07-26 10:16:19.015353] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.627 [2024-07-26 10:16:19.023993] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.627 [2024-07-26 10:16:19.024025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.627 [2024-07-26 10:16:19.036909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.627 [2024-07-26 10:16:19.036942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.627 [2024-07-26 10:16:19.046883] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.627 [2024-07-26 10:16:19.046917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.627 [2024-07-26 10:16:19.061241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.627 [2024-07-26 10:16:19.061274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.627 [2024-07-26 10:16:19.071084] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.627 [2024-07-26 10:16:19.071116] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.627 [2024-07-26 10:16:19.081460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.627 [2024-07-26 10:16:19.081493] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.886 [2024-07-26 10:16:19.091860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.886 [2024-07-26 10:16:19.091894] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.886 [2024-07-26 10:16:19.102696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.886 [2024-07-26 10:16:19.102729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.886 [2024-07-26 10:16:19.116757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.886 [2024-07-26 10:16:19.116789] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.886 [2024-07-26 10:16:19.125911] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.886 [2024-07-26 10:16:19.125942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.886 [2024-07-26 10:16:19.141026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.886 [2024-07-26 10:16:19.141060] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.886 [2024-07-26 10:16:19.150682] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.886 [2024-07-26 10:16:19.150714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.886 [2024-07-26 10:16:19.164674] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.886 [2024-07-26 10:16:19.164706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.886 [2024-07-26 10:16:19.179745] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.886 [2024-07-26 10:16:19.179778] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.886 [2024-07-26 10:16:19.191431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.886 [2024-07-26 10:16:19.191481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.886 [2024-07-26 10:16:19.208525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.886 [2024-07-26 10:16:19.208574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.886 [2024-07-26 10:16:19.224685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.886 [2024-07-26 10:16:19.224732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.886 [2024-07-26 10:16:19.242909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.886 [2024-07-26 10:16:19.242941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.886 [2024-07-26 10:16:19.257167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.886 [2024-07-26 10:16:19.257216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.886 [2024-07-26 10:16:19.273074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.886 [2024-07-26 10:16:19.273107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.886 [2024-07-26 10:16:19.289669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.886 [2024-07-26 10:16:19.289718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.886 [2024-07-26 10:16:19.306435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.886 [2024-07-26 10:16:19.306484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.886 [2024-07-26 10:16:19.322801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.886 [2024-07-26 10:16:19.322832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.886 [2024-07-26 10:16:19.339904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.886 [2024-07-26 10:16:19.339952] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.144 [2024-07-26 10:16:19.355774] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.144 [2024-07-26 10:16:19.355806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.144 [2024-07-26 10:16:19.373837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.144 [2024-07-26 10:16:19.373886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.144 [2024-07-26 10:16:19.384208] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.144 [2024-07-26 10:16:19.384242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.144 [2024-07-26 10:16:19.398626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.144 [2024-07-26 10:16:19.398660] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.144 [2024-07-26 10:16:19.409981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.144 [2024-07-26 10:16:19.410015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.144 [2024-07-26 10:16:19.419042] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.144 [2024-07-26 10:16:19.419089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.144 [2024-07-26 10:16:19.430352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.144 [2024-07-26 10:16:19.430415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.144 [2024-07-26 10:16:19.440693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.144 [2024-07-26 10:16:19.440725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.144 [2024-07-26 10:16:19.451173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.144 [2024-07-26 10:16:19.451205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.144 [2024-07-26 10:16:19.463365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.145 [2024-07-26 10:16:19.463400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.145 [2024-07-26 10:16:19.473032] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.145 [2024-07-26 10:16:19.473065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.145 [2024-07-26 10:16:19.487931] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.145 [2024-07-26 10:16:19.487973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.145 [2024-07-26 10:16:19.504465] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.145 [2024-07-26 10:16:19.504514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.145 [2024-07-26 10:16:19.514092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.145 [2024-07-26 10:16:19.514124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.145 [2024-07-26 10:16:19.528885] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.145 [2024-07-26 10:16:19.528933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.145 [2024-07-26 10:16:19.538318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.145 [2024-07-26 10:16:19.538351] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.145 [2024-07-26 10:16:19.549706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.145 [2024-07-26 10:16:19.549738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.145 [2024-07-26 10:16:19.561743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.145 [2024-07-26 10:16:19.561775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.145 [2024-07-26 10:16:19.570888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.145 [2024-07-26 10:16:19.570953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.145 [2024-07-26 10:16:19.584397] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.145 [2024-07-26 10:16:19.584459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.403 [2024-07-26 10:16:19.601167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.403 [2024-07-26 10:16:19.601200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.403 [2024-07-26 10:16:19.611554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.403 [2024-07-26 10:16:19.611629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.403 [2024-07-26 10:16:19.622472] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.403 [2024-07-26 10:16:19.622520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.403 [2024-07-26 10:16:19.634755] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.403 [2024-07-26 10:16:19.634787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.403 [2024-07-26 10:16:19.653446] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.403 [2024-07-26 10:16:19.653494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.403 [2024-07-26 10:16:19.667549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.403 [2024-07-26 10:16:19.667594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.403 [2024-07-26 10:16:19.677406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.403 [2024-07-26 10:16:19.677453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.403 [2024-07-26 10:16:19.688693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.403 [2024-07-26 10:16:19.688740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.403 [2024-07-26 10:16:19.699194] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.403 [2024-07-26 10:16:19.699227] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.403 [2024-07-26 10:16:19.710510] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.403 [2024-07-26 10:16:19.710544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.404 [2024-07-26 10:16:19.721470] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.404 [2024-07-26 10:16:19.721517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.404 [2024-07-26 10:16:19.731771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.404 [2024-07-26 10:16:19.731804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.404 [2024-07-26 10:16:19.742294] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.404 [2024-07-26 10:16:19.742326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.404 [2024-07-26 10:16:19.753267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.404 [2024-07-26 10:16:19.753314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.404 [2024-07-26 10:16:19.768152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.404 [2024-07-26 10:16:19.768185] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.404 [2024-07-26 10:16:19.785577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.404 [2024-07-26 10:16:19.785636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.404 [2024-07-26 10:16:19.795878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.404 [2024-07-26 10:16:19.795933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.404 [2024-07-26 10:16:19.806564] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.404 [2024-07-26 10:16:19.806637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.404 [2024-07-26 10:16:19.818655] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.404 [2024-07-26 10:16:19.818703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.404 [2024-07-26 10:16:19.828451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.404 [2024-07-26 10:16:19.828496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.404 [2024-07-26 10:16:19.840673] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.404 [2024-07-26 10:16:19.840719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.404 [2024-07-26 10:16:19.851902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.404 [2024-07-26 10:16:19.851959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.662 [2024-07-26 10:16:19.868870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.662 [2024-07-26 10:16:19.868917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.662 [2024-07-26 10:16:19.878295] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.662 [2024-07-26 10:16:19.878341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.662 [2024-07-26 10:16:19.892827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.662 [2024-07-26 10:16:19.892857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.662 [2024-07-26 10:16:19.910588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.662 [2024-07-26 10:16:19.910618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.662 [2024-07-26 10:16:19.920622] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.662 [2024-07-26 10:16:19.920678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.662 [2024-07-26 10:16:19.934942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.662 [2024-07-26 10:16:19.934989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.662 [2024-07-26 10:16:19.944033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.662 [2024-07-26 10:16:19.944065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.662 [2024-07-26 10:16:19.956625] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.662 [2024-07-26 10:16:19.956681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.662 [2024-07-26 10:16:19.972232] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.662 [2024-07-26 10:16:19.972294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.662 [2024-07-26 10:16:19.983302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.662 [2024-07-26 10:16:19.983348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.662 [2024-07-26 10:16:19.999299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.662 [2024-07-26 10:16:19.999346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.662 [2024-07-26 10:16:20.015174] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.662 [2024-07-26 10:16:20.015222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.662 [2024-07-26 10:16:20.024262] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.662 [2024-07-26 10:16:20.024301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.662 [2024-07-26 10:16:20.036808] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.662 [2024-07-26 10:16:20.036857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.662 [2024-07-26 10:16:20.046401] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.662 [2024-07-26 10:16:20.046448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.662 [2024-07-26 10:16:20.056536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.662 [2024-07-26 10:16:20.056583] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.662 [2024-07-26 10:16:20.066869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.662 [2024-07-26 10:16:20.066898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.662 [2024-07-26 10:16:20.077296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.662 [2024-07-26 10:16:20.077326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.662 [2024-07-26 10:16:20.087895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.662 [2024-07-26 10:16:20.087935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.662 [2024-07-26 10:16:20.102351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.662 [2024-07-26 10:16:20.102399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.663 [2024-07-26 10:16:20.112726] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.663 [2024-07-26 10:16:20.112773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.921 [2024-07-26 10:16:20.126559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.921 [2024-07-26 10:16:20.126633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.921 [2024-07-26 10:16:20.136024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.921 [2024-07-26 10:16:20.136056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.921 [2024-07-26 10:16:20.147115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.921 [2024-07-26 10:16:20.147146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.921 [2024-07-26 10:16:20.157578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.921 [2024-07-26 10:16:20.157652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.921 [2024-07-26 10:16:20.167978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.921 [2024-07-26 10:16:20.168009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.921 [2024-07-26 10:16:20.178213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.921 [2024-07-26 10:16:20.178260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.921 [2024-07-26 10:16:20.188289] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.921 [2024-07-26 10:16:20.188334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.921 [2024-07-26 10:16:20.198247] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.921 [2024-07-26 10:16:20.198294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.921 [2024-07-26 10:16:20.208962] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.921 [2024-07-26 10:16:20.209008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.921 [2024-07-26 10:16:20.219966] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.921 [2024-07-26 10:16:20.219998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.921 [2024-07-26 10:16:20.234761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.921 [2024-07-26 10:16:20.234807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.921 [2024-07-26 10:16:20.243799] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.921 [2024-07-26 10:16:20.243845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.921 [2024-07-26 10:16:20.256831] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.921 [2024-07-26 10:16:20.256864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.921 [2024-07-26 10:16:20.266657] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.921 [2024-07-26 10:16:20.266688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.921 [2024-07-26 10:16:20.277076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.921 [2024-07-26 10:16:20.277108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.921 [2024-07-26 10:16:20.289174] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.921 [2024-07-26 10:16:20.289204] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.922 [2024-07-26 10:16:20.300675] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.922 [2024-07-26 10:16:20.300722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.922 [2024-07-26 10:16:20.309487] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.922 [2024-07-26 10:16:20.309533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.922 [2024-07-26 10:16:20.322083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.922 [2024-07-26 10:16:20.322114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.922 [2024-07-26 10:16:20.332275] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.922 [2024-07-26 10:16:20.332324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.922 [2024-07-26 10:16:20.347412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.922 [2024-07-26 10:16:20.347459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.922 [2024-07-26 10:16:20.357137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.922 [2024-07-26 10:16:20.357170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.922 [2024-07-26 10:16:20.371679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.922 [2024-07-26 10:16:20.371725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.181 [2024-07-26 10:16:20.390139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.181 [2024-07-26 10:16:20.390186] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.181 [2024-07-26 10:16:20.400148] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.181 [2024-07-26 10:16:20.400180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.181 [2024-07-26 10:16:20.413884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.181 [2024-07-26 10:16:20.413932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.181 [2024-07-26 10:16:20.423356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.181 [2024-07-26 10:16:20.423401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.181 [2024-07-26 10:16:20.434305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.181 [2024-07-26 10:16:20.434351] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.181 [2024-07-26 10:16:20.444443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.181 [2024-07-26 10:16:20.444490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.181 [2024-07-26 10:16:20.454635] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.181 [2024-07-26 10:16:20.454680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.181 [2024-07-26 10:16:20.464955] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.181 [2024-07-26 10:16:20.465001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.181 [2024-07-26 10:16:20.474850] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.181 [2024-07-26 10:16:20.474897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.181 [2024-07-26 10:16:20.485210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.181 [2024-07-26 10:16:20.485256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.181 [2024-07-26 10:16:20.495331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.181 [2024-07-26 10:16:20.495392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.181 [2024-07-26 10:16:20.505414] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.181 [2024-07-26 10:16:20.505461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.181 [2024-07-26 10:16:20.515368] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.181 [2024-07-26 10:16:20.515398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.181 [2024-07-26 10:16:20.529839] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.181 [2024-07-26 10:16:20.529885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.181 [2024-07-26 10:16:20.538420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.181 [2024-07-26 10:16:20.538466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.181 [2024-07-26 10:16:20.551902] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.181 [2024-07-26 10:16:20.551958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.181 [2024-07-26 10:16:20.561798] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.181 [2024-07-26 10:16:20.561859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.181 [2024-07-26 10:16:20.572439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.181 [2024-07-26 10:16:20.572485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.181 [2024-07-26 10:16:20.582857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.181 [2024-07-26 10:16:20.582904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.181 [2024-07-26 10:16:20.593212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.181 [2024-07-26 10:16:20.593258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.181 [2024-07-26 10:16:20.605339] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.181 [2024-07-26 10:16:20.605385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.181 [2024-07-26 10:16:20.614235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.181 [2024-07-26 10:16:20.614281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.181 [2024-07-26 10:16:20.626805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.181 [2024-07-26 10:16:20.626851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.440 [2024-07-26 10:16:20.636747] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.440 [2024-07-26 10:16:20.636793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.440 [2024-07-26 10:16:20.647181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.440 [2024-07-26 10:16:20.647228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.440 [2024-07-26 10:16:20.658809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.440 [2024-07-26 10:16:20.658855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.440 [2024-07-26 10:16:20.668134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.440 [2024-07-26 10:16:20.668166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.440 [2024-07-26 10:16:20.679168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.440 [2024-07-26 10:16:20.679202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.440 [2024-07-26 10:16:20.689861] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.440 [2024-07-26 10:16:20.689907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.440 [2024-07-26 10:16:20.700207] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.440 [2024-07-26 10:16:20.700240] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.440 [2024-07-26 10:16:20.711268] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.440 [2024-07-26 10:16:20.711302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.440 [2024-07-26 10:16:20.723960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.440 [2024-07-26 10:16:20.723993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.440 [2024-07-26 10:16:20.733430] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.440 [2024-07-26 10:16:20.733492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.440 [2024-07-26 10:16:20.749561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.440 [2024-07-26 10:16:20.749637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.440 [2024-07-26 10:16:20.759604] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.440 [2024-07-26 10:16:20.759662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.440 [2024-07-26 10:16:20.774549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.440 [2024-07-26 10:16:20.774623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.440 [2024-07-26 10:16:20.790493] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.440 [2024-07-26 10:16:20.790539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.440 [2024-07-26 10:16:20.799764] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.440 [2024-07-26 10:16:20.799796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.440 [2024-07-26 10:16:20.813037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.440 [2024-07-26 10:16:20.813084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.440 [2024-07-26 10:16:20.823374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.440 [2024-07-26 10:16:20.823421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.440 [2024-07-26 10:16:20.834097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.440 [2024-07-26 10:16:20.834145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.440 [2024-07-26 10:16:20.845857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.440 [2024-07-26 10:16:20.845904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.440 [2024-07-26 10:16:20.854993] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.440 [2024-07-26 10:16:20.855057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.440 [2024-07-26 10:16:20.866350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.440 [2024-07-26 10:16:20.866397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.440 [2024-07-26 10:16:20.876879] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.440 [2024-07-26 10:16:20.876928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.440 [2024-07-26 10:16:20.887802] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.440 [2024-07-26 10:16:20.887850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.700 [2024-07-26 10:16:20.900548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.700 [2024-07-26 10:16:20.900608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.700 [2024-07-26 10:16:20.910402] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.700 [2024-07-26 10:16:20.910449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.700 [2024-07-26 10:16:20.925726] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.700 [2024-07-26 10:16:20.925774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.700 [2024-07-26 10:16:20.935204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.700 [2024-07-26 10:16:20.935252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.700 [2024-07-26 10:16:20.946448] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.700 [2024-07-26 10:16:20.946509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.700 [2024-07-26 10:16:20.958460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.700 [2024-07-26 10:16:20.958508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.700 [2024-07-26 10:16:20.967943] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.700 [2024-07-26 10:16:20.967985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.700 [2024-07-26 10:16:20.980559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.700 [2024-07-26 10:16:20.980633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.700 [2024-07-26 10:16:20.990825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.700 [2024-07-26 10:16:20.990872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.700 [2024-07-26 10:16:21.004806] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.700 [2024-07-26 10:16:21.004839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.700 [2024-07-26 10:16:21.021660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.700 [2024-07-26 10:16:21.021708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.700 [2024-07-26 10:16:21.031107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.700 [2024-07-26 10:16:21.031139] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.700 [2024-07-26 10:16:21.045587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.700 [2024-07-26 10:16:21.045645] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.700 [2024-07-26 10:16:21.055209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.700 [2024-07-26 10:16:21.055256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.700 [2024-07-26 10:16:21.070163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.700 [2024-07-26 10:16:21.070210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.700 [2024-07-26 10:16:21.079798] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.700 [2024-07-26 10:16:21.079830] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.700 [2024-07-26 10:16:21.096291] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.700 [2024-07-26 10:16:21.096323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.700 [2024-07-26 10:16:21.106709] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.700 [2024-07-26 10:16:21.106756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.700 [2024-07-26 10:16:21.117953] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.700 [2024-07-26 10:16:21.118000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.700 [2024-07-26 10:16:21.128235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.700 [2024-07-26 10:16:21.128297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.700 [2024-07-26 10:16:21.138396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.700 [2024-07-26 10:16:21.138442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.700 [2024-07-26 10:16:21.149236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.700 [2024-07-26 10:16:21.149284] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.959 [2024-07-26 10:16:21.166567] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.960 [2024-07-26 10:16:21.166628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.960 [2024-07-26 10:16:21.176354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.960 [2024-07-26 10:16:21.176402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.960 [2024-07-26 10:16:21.191477] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.960 [2024-07-26 10:16:21.191524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.960 [2024-07-26 10:16:21.201768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.960 [2024-07-26 10:16:21.201800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.960 [2024-07-26 10:16:21.212322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.960 [2024-07-26 10:16:21.212370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.960 [2024-07-26 10:16:21.224214] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.960 [2024-07-26 10:16:21.224262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.960 [2024-07-26 10:16:21.233488] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.960 [2024-07-26 10:16:21.233534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.960 [2024-07-26 10:16:21.245301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.960 [2024-07-26 10:16:21.245349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.960 [2024-07-26 10:16:21.256117] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.960 [2024-07-26 10:16:21.256150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.960 [2024-07-26 10:16:21.268059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.960 [2024-07-26 10:16:21.268093] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.960 [2024-07-26 10:16:21.277101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.960 [2024-07-26 10:16:21.277132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.960 [2024-07-26 10:16:21.290087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.960 [2024-07-26 10:16:21.290135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.960 [2024-07-26 10:16:21.305779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.960 [2024-07-26 10:16:21.305827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.960 [2024-07-26 10:16:21.323327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.960 [2024-07-26 10:16:21.323404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.960 [2024-07-26 10:16:21.338941] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.960 [2024-07-26 10:16:21.339005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.960 [2024-07-26 10:16:21.355858] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.960 [2024-07-26 10:16:21.355906] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.960 [2024-07-26 10:16:21.365740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.960 [2024-07-26 10:16:21.365772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.960 [2024-07-26 10:16:21.376821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.960 [2024-07-26 10:16:21.376855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.960 [2024-07-26 10:16:21.387398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.960 [2024-07-26 10:16:21.387431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.960 [2024-07-26 10:16:21.398527] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.960 [2024-07-26 10:16:21.398574] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.960 [2024-07-26 10:16:21.411094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.960 [2024-07-26 10:16:21.411142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.219 [2024-07-26 10:16:21.422519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.219 [2024-07-26 10:16:21.422567] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.219 [2024-07-26 10:16:21.431806] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.219 [2024-07-26 10:16:21.431854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.219 [2024-07-26 10:16:21.443454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.219 [2024-07-26 10:16:21.443502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.219 [2024-07-26 10:16:21.453801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.219 [2024-07-26 10:16:21.453849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.219 [2024-07-26 10:16:21.464384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.219 [2024-07-26 10:16:21.464431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.219 [2024-07-26 10:16:21.475014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.219 [2024-07-26 10:16:21.475063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.219 [2024-07-26 10:16:21.485804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.219 [2024-07-26 10:16:21.485837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.219 [2024-07-26 10:16:21.496434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.219 [2024-07-26 10:16:21.496481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.219 [2024-07-26 10:16:21.508825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.219 [2024-07-26 10:16:21.508858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.219 [2024-07-26 10:16:21.527376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.219 [2024-07-26 10:16:21.527426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.219 [2024-07-26 10:16:21.542240] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.219 [2024-07-26 10:16:21.542288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.219 [2024-07-26 10:16:21.553650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.219 [2024-07-26 10:16:21.553697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.219 [2024-07-26 10:16:21.561956] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.219 [2024-07-26 10:16:21.562003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.219 [2024-07-26 10:16:21.574418] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.219 [2024-07-26 10:16:21.574464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.219 [2024-07-26 10:16:21.584184] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.219 [2024-07-26 10:16:21.584219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.219 [2024-07-26 10:16:21.594213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.219 [2024-07-26 10:16:21.594245] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.219 [2024-07-26 10:16:21.604732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.219 [2024-07-26 10:16:21.604778] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.219 [2024-07-26 10:16:21.617104] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.219 [2024-07-26 10:16:21.617151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.219 [2024-07-26 10:16:21.627011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.219 [2024-07-26 10:16:21.627056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.219 [2024-07-26 10:16:21.642095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.219 [2024-07-26 10:16:21.642143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.219 [2024-07-26 10:16:21.651993] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.219 [2024-07-26 10:16:21.652027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.219 [2024-07-26 10:16:21.667850] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.219 [2024-07-26 10:16:21.667883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.478 [2024-07-26 10:16:21.684701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.478 [2024-07-26 10:16:21.684749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.478 [2024-07-26 10:16:21.700856] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.479 [2024-07-26 10:16:21.700906] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.479 [2024-07-26 10:16:21.718362] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.479 [2024-07-26 10:16:21.718443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.479 [2024-07-26 10:16:21.733361] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.479 [2024-07-26 10:16:21.733411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.479 [2024-07-26 10:16:21.742713] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.479 [2024-07-26 10:16:21.742771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.479 [2024-07-26 10:16:21.758741] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.479 [2024-07-26 10:16:21.758774] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.479 [2024-07-26 10:16:21.777184] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.479 [2024-07-26 10:16:21.777242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.479 [2024-07-26 10:16:21.791598] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.479 [2024-07-26 10:16:21.791658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.479 [2024-07-26 10:16:21.807153] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.479 [2024-07-26 10:16:21.807200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.479 [2024-07-26 10:16:21.825385] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.479 [2024-07-26 10:16:21.825454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.479 [2024-07-26 10:16:21.839840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.479 [2024-07-26 10:16:21.839889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.479 [2024-07-26 10:16:21.856080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.479 [2024-07-26 10:16:21.856116] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.479 [2024-07-26 10:16:21.872635] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.479 [2024-07-26 10:16:21.872699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.479 [2024-07-26 10:16:21.888695] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.479 [2024-07-26 10:16:21.888742] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.479 [2024-07-26 10:16:21.905715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.479 [2024-07-26 10:16:21.905761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.479 [2024-07-26 10:16:21.922214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.479 [2024-07-26 10:16:21.922268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:21.938487] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:21.938534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:21.959899] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:21.959966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:21.969887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:21.969951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:21.980514] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:21.980560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:21.993062] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:21.993094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:22.002757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:22.002812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:22.017241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:22.017290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:22.027251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:22.027298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:22.037703] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:22.037756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:22.044978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:22.045021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 00:11:08.738 Latency(us) 00:11:08.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:08.738 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:08.738 Nvme1n1 : 5.01 12120.09 94.69 0.00 0.00 10548.20 4259.84 23235.49 00:11:08.738 =================================================================================================================== 00:11:08.738 Total : 12120.09 94.69 0.00 0.00 10548.20 4259.84 23235.49 00:11:08.738 [2024-07-26 10:16:22.051961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:22.051992] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:22.059903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:22.059944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:22.067920] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:22.067953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:22.075935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:22.075972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:22.083942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:22.083976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:22.095964] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:22.096002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:22.103940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:22.103972] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:22.111947] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:22.111982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:22.123961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:22.124013] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:22.131954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:22.131992] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:22.143970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:22.144009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:22.151989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:22.152025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:22.159969] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:22.160004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:22.167970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:22.168004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:22.175955] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:22.175985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:22.183963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:22.183992] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.738 [2024-07-26 10:16:22.191989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.738 [2024-07-26 10:16:22.192024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.998 [2024-07-26 10:16:22.200007] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.998 [2024-07-26 10:16:22.200045] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.998 [2024-07-26 10:16:22.207987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.998 [2024-07-26 10:16:22.208026] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.998 [2024-07-26 10:16:22.215994] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.998 [2024-07-26 10:16:22.216029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.998 [2024-07-26 10:16:22.223998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.998 [2024-07-26 10:16:22.224038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.998 [2024-07-26 10:16:22.231991] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.998 [2024-07-26 10:16:22.232025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.998 [2024-07-26 10:16:22.239978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.998 [2024-07-26 10:16:22.240006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.998 [2024-07-26 10:16:22.247975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.998 [2024-07-26 10:16:22.248001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.998 [2024-07-26 10:16:22.255976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.998 [2024-07-26 10:16:22.255998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.998 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (74480) - No such process 00:11:08.998 10:16:22 -- target/zcopy.sh@49 -- # wait 74480 00:11:08.998 10:16:22 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.998 10:16:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:08.998 10:16:22 -- common/autotest_common.sh@10 -- # set +x 00:11:08.998 10:16:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:08.998 10:16:22 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:08.998 10:16:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:08.998 10:16:22 -- common/autotest_common.sh@10 -- # set +x 00:11:08.998 delay0 00:11:08.998 10:16:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:08.998 10:16:22 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:08.998 10:16:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:08.998 10:16:22 -- common/autotest_common.sh@10 -- # set +x 00:11:08.998 10:16:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:08.998 10:16:22 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:08.998 [2024-07-26 10:16:22.438026] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:15.577 Initializing NVMe Controllers 00:11:15.577 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:15.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:15.577 Initialization complete. Launching workers. 
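The RPC sequence traced just above is what sets up this abort pass: the original namespace is removed, the malloc0 bdev is wrapped in a delay bdev (bdev_delay_create, whose latency arguments, average and p99 for reads and writes, are in microseconds, here one second each), and the delay bdev is re-exposed as NSID 1 so queued I/O sits long enough to be aborted. A condensed sketch of that sequence using the same names as the trace (rpc.py stands in for the rpc_cmd wrapper; this is not the exact zcopy.sh flow):
# Sketch condensed from the trace above.
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s added latency per I/O
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# With the slow namespace in place, drive queued I/O and abort it:
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'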
00:11:15.578 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 99 00:11:15.578 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 386, failed to submit 33 00:11:15.578 success 257, unsuccess 129, failed 0 00:11:15.578 10:16:28 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:15.578 10:16:28 -- target/zcopy.sh@60 -- # nvmftestfini 00:11:15.578 10:16:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:15.578 10:16:28 -- nvmf/common.sh@116 -- # sync 00:11:15.578 10:16:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:15.578 10:16:28 -- nvmf/common.sh@119 -- # set +e 00:11:15.578 10:16:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:15.578 10:16:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:15.578 rmmod nvme_tcp 00:11:15.578 rmmod nvme_fabrics 00:11:15.578 rmmod nvme_keyring 00:11:15.578 10:16:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:15.578 10:16:28 -- nvmf/common.sh@123 -- # set -e 00:11:15.578 10:16:28 -- nvmf/common.sh@124 -- # return 0 00:11:15.578 10:16:28 -- nvmf/common.sh@477 -- # '[' -n 74335 ']' 00:11:15.578 10:16:28 -- nvmf/common.sh@478 -- # killprocess 74335 00:11:15.578 10:16:28 -- common/autotest_common.sh@926 -- # '[' -z 74335 ']' 00:11:15.578 10:16:28 -- common/autotest_common.sh@930 -- # kill -0 74335 00:11:15.578 10:16:28 -- common/autotest_common.sh@931 -- # uname 00:11:15.578 10:16:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:15.578 10:16:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74335 00:11:15.578 10:16:28 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:11:15.578 10:16:28 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:11:15.578 killing process with pid 74335 00:11:15.578 10:16:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74335' 00:11:15.578 10:16:28 -- common/autotest_common.sh@945 -- # kill 74335 00:11:15.578 10:16:28 -- common/autotest_common.sh@950 -- # wait 74335 00:11:15.578 10:16:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:15.578 10:16:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:15.578 10:16:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:15.578 10:16:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:15.578 10:16:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:15.578 10:16:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.578 10:16:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:15.578 10:16:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.578 10:16:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:15.578 00:11:15.578 real 0m24.509s 00:11:15.578 user 0m40.292s 00:11:15.578 sys 0m6.695s 00:11:15.578 10:16:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:15.578 10:16:28 -- common/autotest_common.sh@10 -- # set +x 00:11:15.578 ************************************ 00:11:15.578 END TEST nvmf_zcopy 00:11:15.578 ************************************ 00:11:15.578 10:16:28 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:15.578 10:16:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:15.578 10:16:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:15.578 10:16:28 -- common/autotest_common.sh@10 -- # set +x 00:11:15.578 ************************************ 00:11:15.578 START TEST nvmf_nmic 
00:11:15.578 ************************************ 00:11:15.578 10:16:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:15.578 * Looking for test storage... 00:11:15.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:15.578 10:16:29 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:15.578 10:16:29 -- nvmf/common.sh@7 -- # uname -s 00:11:15.578 10:16:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.578 10:16:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.578 10:16:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.578 10:16:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.578 10:16:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.578 10:16:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.578 10:16:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.578 10:16:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.578 10:16:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.578 10:16:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.846 10:16:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:11:15.846 10:16:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:11:15.846 10:16:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.846 10:16:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.846 10:16:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:15.846 10:16:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:15.846 10:16:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.846 10:16:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.846 10:16:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.846 10:16:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.846 10:16:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.846 10:16:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.846 10:16:29 -- paths/export.sh@5 -- # export PATH 00:11:15.846 10:16:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.846 10:16:29 -- nvmf/common.sh@46 -- # : 0 00:11:15.846 10:16:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:15.846 10:16:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:15.846 10:16:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:15.846 10:16:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.846 10:16:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.846 10:16:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:15.846 10:16:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:15.846 10:16:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:15.846 10:16:29 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:15.846 10:16:29 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:15.847 10:16:29 -- target/nmic.sh@14 -- # nvmftestinit 00:11:15.847 10:16:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:15.847 10:16:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.847 10:16:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:15.847 10:16:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:15.847 10:16:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:15.847 10:16:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.847 10:16:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:15.847 10:16:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.847 10:16:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:15.847 10:16:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:15.847 10:16:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:15.847 10:16:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:15.847 10:16:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:15.847 10:16:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:15.847 10:16:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:15.847 10:16:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.847 10:16:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:15.847 10:16:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:15.847 10:16:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:15.847 10:16:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:15.847 10:16:29 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:15.847 10:16:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:15.847 10:16:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:15.847 10:16:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:15.847 10:16:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:15.847 10:16:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:15.847 10:16:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:15.847 10:16:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:15.847 Cannot find device "nvmf_tgt_br" 00:11:15.847 10:16:29 -- nvmf/common.sh@154 -- # true 00:11:15.847 10:16:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:15.847 Cannot find device "nvmf_tgt_br2" 00:11:15.847 10:16:29 -- nvmf/common.sh@155 -- # true 00:11:15.847 10:16:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:15.847 10:16:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:15.847 Cannot find device "nvmf_tgt_br" 00:11:15.847 10:16:29 -- nvmf/common.sh@157 -- # true 00:11:15.847 10:16:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:15.847 Cannot find device "nvmf_tgt_br2" 00:11:15.847 10:16:29 -- nvmf/common.sh@158 -- # true 00:11:15.847 10:16:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:15.847 10:16:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:15.847 10:16:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:15.847 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:15.847 10:16:29 -- nvmf/common.sh@161 -- # true 00:11:15.847 10:16:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:15.847 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:15.847 10:16:29 -- nvmf/common.sh@162 -- # true 00:11:15.847 10:16:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:15.847 10:16:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:15.847 10:16:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:15.847 10:16:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:15.847 10:16:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:15.847 10:16:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:15.847 10:16:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:15.847 10:16:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:15.847 10:16:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:15.847 10:16:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:15.847 10:16:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:15.847 10:16:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:15.847 10:16:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:15.847 10:16:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:15.847 10:16:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:15.847 10:16:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:11:15.847 10:16:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:15.847 10:16:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:15.847 10:16:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:16.106 10:16:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:16.106 10:16:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:16.106 10:16:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:16.106 10:16:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:16.106 10:16:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:16.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:16.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:11:16.106 00:11:16.106 --- 10.0.0.2 ping statistics --- 00:11:16.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.106 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:11:16.106 10:16:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:16.106 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:16.106 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:11:16.106 00:11:16.106 --- 10.0.0.3 ping statistics --- 00:11:16.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.106 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:11:16.106 10:16:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:16.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:16.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:11:16.106 00:11:16.106 --- 10.0.0.1 ping statistics --- 00:11:16.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.106 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:11:16.106 10:16:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.106 10:16:29 -- nvmf/common.sh@421 -- # return 0 00:11:16.106 10:16:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:16.106 10:16:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.106 10:16:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:16.106 10:16:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:16.106 10:16:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.106 10:16:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:16.106 10:16:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:16.106 10:16:29 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:16.106 10:16:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:16.106 10:16:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:16.106 10:16:29 -- common/autotest_common.sh@10 -- # set +x 00:11:16.106 10:16:29 -- nvmf/common.sh@469 -- # nvmfpid=74807 00:11:16.106 10:16:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:16.106 10:16:29 -- nvmf/common.sh@470 -- # waitforlisten 74807 00:11:16.106 10:16:29 -- common/autotest_common.sh@819 -- # '[' -z 74807 ']' 00:11:16.106 10:16:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.106 10:16:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:16.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
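Everything up to this point is nvmf_veth_init building the private test topology: a network namespace (nvmf_tgt_ns_spdk) holds the target-side veth ends carrying 10.0.0.2 and 10.0.0.3, the initiator end nvmf_init_if keeps 10.0.0.1 in the root namespace, the peer interfaces are joined on the nvmf_br bridge, an iptables rule admits NVMe/TCP traffic on port 4420, and the three pings confirm the paths before the target starts. A condensed sketch of that setup, with one target interface shown and commands as they appear in the trace:
# Condensed sketch; error handling and the second target interface are omitted.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                      # bridge joining the *_br peer ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# (each interface is also brought up with `ip link set ... up`)
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator-side reachability check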
00:11:16.106 10:16:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.106 10:16:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:16.106 10:16:29 -- common/autotest_common.sh@10 -- # set +x 00:11:16.106 [2024-07-26 10:16:29.430009] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:11:16.106 [2024-07-26 10:16:29.430120] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.365 [2024-07-26 10:16:29.572113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.365 [2024-07-26 10:16:29.666773] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:16.365 [2024-07-26 10:16:29.666946] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.365 [2024-07-26 10:16:29.666968] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.365 [2024-07-26 10:16:29.666982] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:16.365 [2024-07-26 10:16:29.667136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.365 [2024-07-26 10:16:29.667898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.365 [2024-07-26 10:16:29.668032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.365 [2024-07-26 10:16:29.668050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.301 10:16:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:17.301 10:16:30 -- common/autotest_common.sh@852 -- # return 0 00:11:17.301 10:16:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:17.301 10:16:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:17.301 10:16:30 -- common/autotest_common.sh@10 -- # set +x 00:11:17.302 10:16:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.302 10:16:30 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:17.302 10:16:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.302 10:16:30 -- common/autotest_common.sh@10 -- # set +x 00:11:17.302 [2024-07-26 10:16:30.451989] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.302 10:16:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:17.302 10:16:30 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:17.302 10:16:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.302 10:16:30 -- common/autotest_common.sh@10 -- # set +x 00:11:17.302 Malloc0 00:11:17.302 10:16:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:17.302 10:16:30 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:17.302 10:16:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.302 10:16:30 -- common/autotest_common.sh@10 -- # set +x 00:11:17.302 10:16:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:17.302 10:16:30 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:17.302 10:16:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.302 10:16:30 
-- common/autotest_common.sh@10 -- # set +x 00:11:17.302 10:16:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:17.302 10:16:30 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.302 10:16:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.302 10:16:30 -- common/autotest_common.sh@10 -- # set +x 00:11:17.302 [2024-07-26 10:16:30.518758] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.302 10:16:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:17.302 test case1: single bdev can't be used in multiple subsystems 00:11:17.302 10:16:30 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:17.302 10:16:30 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:17.302 10:16:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.302 10:16:30 -- common/autotest_common.sh@10 -- # set +x 00:11:17.302 10:16:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:17.302 10:16:30 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:17.302 10:16:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.302 10:16:30 -- common/autotest_common.sh@10 -- # set +x 00:11:17.302 10:16:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:17.302 10:16:30 -- target/nmic.sh@28 -- # nmic_status=0 00:11:17.302 10:16:30 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:17.302 10:16:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.302 10:16:30 -- common/autotest_common.sh@10 -- # set +x 00:11:17.302 [2024-07-26 10:16:30.546623] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:17.302 [2024-07-26 10:16:30.546665] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:17.302 [2024-07-26 10:16:30.546684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.302 request: 00:11:17.302 { 00:11:17.302 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:17.302 "namespace": { 00:11:17.302 "bdev_name": "Malloc0" 00:11:17.302 }, 00:11:17.302 "method": "nvmf_subsystem_add_ns", 00:11:17.302 "req_id": 1 00:11:17.302 } 00:11:17.302 Got JSON-RPC error response 00:11:17.302 response: 00:11:17.302 { 00:11:17.302 "code": -32602, 00:11:17.302 "message": "Invalid parameters" 00:11:17.302 } 00:11:17.302 10:16:30 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:11:17.302 10:16:30 -- target/nmic.sh@29 -- # nmic_status=1 00:11:17.302 10:16:30 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:17.302 Adding namespace failed - expected result. 00:11:17.302 10:16:30 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
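Test case 1 above checks the exclusive-claim rule: Malloc0 is already claimed (type exclusive_write) as a namespace of cnode1, so attaching the same bdev to cnode2 must fail, and the JSON-RPC error shown is the expected outcome. A minimal sketch of the same check driven directly through rpc.py, condensed from the trace:
# Condensed sketch; the second nvmf_subsystem_add_ns is expected to fail because Malloc0 is already claimed.
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # claims the bdev
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
    || echo 'Adding namespace failed - expected result.'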
00:11:17.302 test case2: host connect to nvmf target in multiple paths 00:11:17.302 10:16:30 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:17.302 10:16:30 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:17.302 10:16:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.302 10:16:30 -- common/autotest_common.sh@10 -- # set +x 00:11:17.302 [2024-07-26 10:16:30.554734] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:17.302 10:16:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:17.302 10:16:30 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 --hostid=ff23582e-2e24-4796-b69f-f798c3c56909 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:17.302 10:16:30 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 --hostid=ff23582e-2e24-4796-b69f-f798c3c56909 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:17.561 10:16:30 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:17.561 10:16:30 -- common/autotest_common.sh@1177 -- # local i=0 00:11:17.561 10:16:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:17.561 10:16:30 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:17.561 10:16:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:19.461 10:16:32 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:19.461 10:16:32 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:19.461 10:16:32 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:19.461 10:16:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:19.461 10:16:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:19.461 10:16:32 -- common/autotest_common.sh@1187 -- # return 0 00:11:19.461 10:16:32 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:19.461 [global] 00:11:19.461 thread=1 00:11:19.461 invalidate=1 00:11:19.461 rw=write 00:11:19.461 time_based=1 00:11:19.461 runtime=1 00:11:19.461 ioengine=libaio 00:11:19.461 direct=1 00:11:19.461 bs=4096 00:11:19.461 iodepth=1 00:11:19.461 norandommap=0 00:11:19.461 numjobs=1 00:11:19.461 00:11:19.461 verify_dump=1 00:11:19.461 verify_backlog=512 00:11:19.461 verify_state_save=0 00:11:19.461 do_verify=1 00:11:19.461 verify=crc32c-intel 00:11:19.461 [job0] 00:11:19.461 filename=/dev/nvme0n1 00:11:19.461 Could not set queue depth (nvme0n1) 00:11:19.719 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.719 fio-3.35 00:11:19.719 Starting 1 thread 00:11:20.671 00:11:20.671 job0: (groupid=0, jobs=1): err= 0: pid=74893: Fri Jul 26 10:16:34 2024 00:11:20.671 read: IOPS=3072, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1000msec) 00:11:20.671 slat (nsec): min=12202, max=37347, avg=14334.04, stdev=2021.70 00:11:20.671 clat (usec): min=138, max=589, avg=172.99, stdev=16.00 00:11:20.671 lat (usec): min=153, max=605, avg=187.33, stdev=16.09 00:11:20.671 clat percentiles (usec): 00:11:20.671 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 161], 00:11:20.671 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:11:20.671 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 
95.00th=[ 196], 00:11:20.671 | 99.00th=[ 204], 99.50th=[ 206], 99.90th=[ 221], 99.95th=[ 260], 00:11:20.671 | 99.99th=[ 586] 00:11:20.671 write: IOPS=3271, BW=12.8MiB/s (13.4MB/s)(12.8MiB/1000msec); 0 zone resets 00:11:20.671 slat (usec): min=17, max=113, avg=20.97, stdev= 4.39 00:11:20.671 clat (usec): min=85, max=237, avg=105.57, stdev=10.34 00:11:20.671 lat (usec): min=104, max=350, avg=126.55, stdev=12.25 00:11:20.671 clat percentiles (usec): 00:11:20.671 | 1.00th=[ 87], 5.00th=[ 91], 10.00th=[ 93], 20.00th=[ 97], 00:11:20.671 | 30.00th=[ 100], 40.00th=[ 103], 50.00th=[ 105], 60.00th=[ 108], 00:11:20.671 | 70.00th=[ 111], 80.00th=[ 113], 90.00th=[ 118], 95.00th=[ 123], 00:11:20.671 | 99.00th=[ 137], 99.50th=[ 143], 99.90th=[ 159], 99.95th=[ 194], 00:11:20.671 | 99.99th=[ 237] 00:11:20.671 bw ( KiB/s): min=12704, max=12704, per=97.10%, avg=12704.00, stdev= 0.00, samples=1 00:11:20.671 iops : min= 3176, max= 3176, avg=3176.00, stdev= 0.00, samples=1 00:11:20.671 lat (usec) : 100=15.10%, 250=84.87%, 500=0.02%, 750=0.02% 00:11:20.671 cpu : usr=2.20%, sys=8.70%, ctx=6343, majf=0, minf=2 00:11:20.671 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.671 issued rwts: total=3072,3271,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.671 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.671 00:11:20.671 Run status group 0 (all jobs): 00:11:20.671 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1000-1000msec 00:11:20.671 WRITE: bw=12.8MiB/s (13.4MB/s), 12.8MiB/s-12.8MiB/s (13.4MB/s-13.4MB/s), io=12.8MiB (13.4MB), run=1000-1000msec 00:11:20.671 00:11:20.671 Disk stats (read/write): 00:11:20.671 nvme0n1: ios=2715/3072, merge=0/0, ticks=496/349, in_queue=845, util=91.38% 00:11:20.671 10:16:34 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:20.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:20.930 10:16:34 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:20.930 10:16:34 -- common/autotest_common.sh@1198 -- # local i=0 00:11:20.930 10:16:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:20.930 10:16:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.930 10:16:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:20.930 10:16:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.930 10:16:34 -- common/autotest_common.sh@1210 -- # return 0 00:11:20.930 10:16:34 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:20.930 10:16:34 -- target/nmic.sh@53 -- # nvmftestfini 00:11:20.930 10:16:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:20.930 10:16:34 -- nvmf/common.sh@116 -- # sync 00:11:20.930 10:16:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:20.930 10:16:34 -- nvmf/common.sh@119 -- # set +e 00:11:20.930 10:16:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:20.930 10:16:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:20.930 rmmod nvme_tcp 00:11:20.930 rmmod nvme_fabrics 00:11:20.930 rmmod nvme_keyring 00:11:20.930 10:16:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:20.930 10:16:34 -- nvmf/common.sh@123 -- # set -e 00:11:20.930 10:16:34 -- nvmf/common.sh@124 -- # return 0 00:11:20.930 10:16:34 -- nvmf/common.sh@477 -- 
# '[' -n 74807 ']' 00:11:20.930 10:16:34 -- nvmf/common.sh@478 -- # killprocess 74807 00:11:20.930 10:16:34 -- common/autotest_common.sh@926 -- # '[' -z 74807 ']' 00:11:20.930 10:16:34 -- common/autotest_common.sh@930 -- # kill -0 74807 00:11:20.930 10:16:34 -- common/autotest_common.sh@931 -- # uname 00:11:20.930 10:16:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:20.930 10:16:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74807 00:11:20.930 killing process with pid 74807 00:11:20.930 10:16:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:20.930 10:16:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:20.930 10:16:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74807' 00:11:20.930 10:16:34 -- common/autotest_common.sh@945 -- # kill 74807 00:11:20.930 10:16:34 -- common/autotest_common.sh@950 -- # wait 74807 00:11:21.188 10:16:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:21.188 10:16:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:21.188 10:16:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:21.188 10:16:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:21.188 10:16:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:21.188 10:16:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.188 10:16:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.188 10:16:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.188 10:16:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:21.188 00:11:21.188 real 0m5.631s 00:11:21.188 user 0m18.274s 00:11:21.188 sys 0m2.136s 00:11:21.188 10:16:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:21.188 10:16:34 -- common/autotest_common.sh@10 -- # set +x 00:11:21.188 ************************************ 00:11:21.188 END TEST nvmf_nmic 00:11:21.188 ************************************ 00:11:21.188 10:16:34 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:21.188 10:16:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:21.188 10:16:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:21.188 10:16:34 -- common/autotest_common.sh@10 -- # set +x 00:11:21.188 ************************************ 00:11:21.188 START TEST nvmf_fio_target 00:11:21.188 ************************************ 00:11:21.188 10:16:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:21.448 * Looking for test storage... 
00:11:21.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:21.448 10:16:34 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:21.448 10:16:34 -- nvmf/common.sh@7 -- # uname -s 00:11:21.448 10:16:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.448 10:16:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.448 10:16:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.448 10:16:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.448 10:16:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.448 10:16:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.448 10:16:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.448 10:16:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.448 10:16:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.448 10:16:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.448 10:16:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:11:21.448 10:16:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:11:21.448 10:16:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.448 10:16:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.448 10:16:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:21.448 10:16:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:21.448 10:16:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.448 10:16:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.448 10:16:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.449 10:16:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.449 10:16:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.449 10:16:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.449 10:16:34 -- paths/export.sh@5 
-- # export PATH 00:11:21.449 10:16:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.449 10:16:34 -- nvmf/common.sh@46 -- # : 0 00:11:21.449 10:16:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:21.449 10:16:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:21.449 10:16:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:21.449 10:16:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.449 10:16:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.449 10:16:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:21.449 10:16:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:21.449 10:16:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:21.449 10:16:34 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:21.449 10:16:34 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:21.449 10:16:34 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:21.449 10:16:34 -- target/fio.sh@16 -- # nvmftestinit 00:11:21.449 10:16:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:21.449 10:16:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.449 10:16:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:21.449 10:16:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:21.449 10:16:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:21.449 10:16:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.449 10:16:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.449 10:16:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.449 10:16:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:21.449 10:16:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:21.449 10:16:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:21.449 10:16:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:21.449 10:16:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:21.449 10:16:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:21.449 10:16:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.449 10:16:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.449 10:16:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:21.449 10:16:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:21.449 10:16:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:21.449 10:16:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:21.449 10:16:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:21.449 10:16:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.449 10:16:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:21.449 10:16:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:21.449 10:16:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:21.449 10:16:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:21.449 10:16:34 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:21.449 10:16:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:21.449 Cannot find device "nvmf_tgt_br" 00:11:21.449 10:16:34 -- nvmf/common.sh@154 -- # true 00:11:21.449 10:16:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:21.449 Cannot find device "nvmf_tgt_br2" 00:11:21.449 10:16:34 -- nvmf/common.sh@155 -- # true 00:11:21.449 10:16:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:21.449 10:16:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:21.449 Cannot find device "nvmf_tgt_br" 00:11:21.449 10:16:34 -- nvmf/common.sh@157 -- # true 00:11:21.449 10:16:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:21.449 Cannot find device "nvmf_tgt_br2" 00:11:21.449 10:16:34 -- nvmf/common.sh@158 -- # true 00:11:21.449 10:16:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:21.449 10:16:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:21.449 10:16:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:21.449 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:21.449 10:16:34 -- nvmf/common.sh@161 -- # true 00:11:21.449 10:16:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:21.449 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:21.449 10:16:34 -- nvmf/common.sh@162 -- # true 00:11:21.449 10:16:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:21.449 10:16:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:21.449 10:16:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:21.449 10:16:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:21.449 10:16:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:21.708 10:16:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:21.708 10:16:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:21.708 10:16:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:21.708 10:16:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:21.708 10:16:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:21.708 10:16:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:21.708 10:16:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:21.708 10:16:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:21.708 10:16:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:21.708 10:16:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:21.708 10:16:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:21.708 10:16:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:21.708 10:16:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:21.708 10:16:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:21.708 10:16:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:21.708 10:16:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:21.708 10:16:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:21.708 10:16:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:21.708 10:16:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:21.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:21.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:11:21.708 00:11:21.708 --- 10.0.0.2 ping statistics --- 00:11:21.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.708 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:11:21.708 10:16:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:21.708 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:21.708 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:11:21.708 00:11:21.708 --- 10.0.0.3 ping statistics --- 00:11:21.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.708 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:11:21.708 10:16:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:21.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:21.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:21.708 00:11:21.708 --- 10.0.0.1 ping statistics --- 00:11:21.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.708 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:21.708 10:16:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.708 10:16:35 -- nvmf/common.sh@421 -- # return 0 00:11:21.708 10:16:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:21.708 10:16:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.708 10:16:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:21.708 10:16:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:21.708 10:16:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.708 10:16:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:21.708 10:16:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:21.708 10:16:35 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:21.708 10:16:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:21.708 10:16:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:21.708 10:16:35 -- common/autotest_common.sh@10 -- # set +x 00:11:21.708 10:16:35 -- nvmf/common.sh@469 -- # nvmfpid=75069 00:11:21.708 10:16:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:21.708 10:16:35 -- nvmf/common.sh@470 -- # waitforlisten 75069 00:11:21.708 10:16:35 -- common/autotest_common.sh@819 -- # '[' -z 75069 ']' 00:11:21.708 10:16:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.708 10:16:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:21.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.708 10:16:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.708 10:16:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:21.708 10:16:35 -- common/autotest_common.sh@10 -- # set +x 00:11:21.708 [2024-07-26 10:16:35.123692] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:11:21.709 [2024-07-26 10:16:35.123777] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.967 [2024-07-26 10:16:35.258186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.967 [2024-07-26 10:16:35.341360] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:21.967 [2024-07-26 10:16:35.341497] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.967 [2024-07-26 10:16:35.341510] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.967 [2024-07-26 10:16:35.341519] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.967 [2024-07-26 10:16:35.341676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.967 [2024-07-26 10:16:35.341742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.967 [2024-07-26 10:16:35.341853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.967 [2024-07-26 10:16:35.341859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.902 10:16:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:22.902 10:16:36 -- common/autotest_common.sh@852 -- # return 0 00:11:22.902 10:16:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:22.902 10:16:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:22.902 10:16:36 -- common/autotest_common.sh@10 -- # set +x 00:11:22.902 10:16:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.902 10:16:36 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:23.160 [2024-07-26 10:16:36.441990] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:23.160 10:16:36 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:23.419 10:16:36 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:23.419 10:16:36 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:23.677 10:16:37 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:23.677 10:16:37 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:23.935 10:16:37 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:23.935 10:16:37 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:24.193 10:16:37 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:24.193 10:16:37 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:24.453 10:16:37 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:24.714 10:16:38 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:24.714 10:16:38 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:24.972 10:16:38 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:24.972 10:16:38 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:25.231 10:16:38 -- target/fio.sh@31 -- # 
concat_malloc_bdevs+=Malloc6 00:11:25.231 10:16:38 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:25.489 10:16:38 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:25.489 10:16:38 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:25.489 10:16:38 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:26.080 10:16:39 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:26.080 10:16:39 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:26.080 10:16:39 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.337 [2024-07-26 10:16:39.681728] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.337 10:16:39 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:26.596 10:16:39 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:26.854 10:16:40 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 --hostid=ff23582e-2e24-4796-b69f-f798c3c56909 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:26.854 10:16:40 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:26.854 10:16:40 -- common/autotest_common.sh@1177 -- # local i=0 00:11:26.854 10:16:40 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:26.854 10:16:40 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:11:26.854 10:16:40 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:11:26.854 10:16:40 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:29.385 10:16:42 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:29.385 10:16:42 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:29.385 10:16:42 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:29.385 10:16:42 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:11:29.385 10:16:42 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.385 10:16:42 -- common/autotest_common.sh@1187 -- # return 0 00:11:29.385 10:16:42 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:29.385 [global] 00:11:29.385 thread=1 00:11:29.385 invalidate=1 00:11:29.385 rw=write 00:11:29.385 time_based=1 00:11:29.385 runtime=1 00:11:29.385 ioengine=libaio 00:11:29.385 direct=1 00:11:29.385 bs=4096 00:11:29.385 iodepth=1 00:11:29.385 norandommap=0 00:11:29.385 numjobs=1 00:11:29.385 00:11:29.385 verify_dump=1 00:11:29.385 verify_backlog=512 00:11:29.385 verify_state_save=0 00:11:29.385 do_verify=1 00:11:29.385 verify=crc32c-intel 00:11:29.385 [job0] 00:11:29.385 filename=/dev/nvme0n1 00:11:29.385 [job1] 00:11:29.385 filename=/dev/nvme0n2 00:11:29.385 [job2] 00:11:29.385 filename=/dev/nvme0n3 00:11:29.385 [job3] 00:11:29.385 filename=/dev/nvme0n4 00:11:29.385 Could not set queue depth (nvme0n1) 00:11:29.385 Could not set queue depth (nvme0n2) 
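Stripped of the xtrace noise, the target configuration that target/fio.sh builds through rpc.py amounts to the sequence below. It is a hedged summary of the calls visible in the trace, not the script itself: rpc.py stands for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path, the names Malloc0..Malloc6 are the defaults bdev_malloc_create returns, and the NQN, serial and address match the log.
# create the TCP transport with the options used by this run (-o, -u 8192)
rpc.py nvmf_create_transport -t tcp -o -u 8192
# seven malloc bdevs, 64 MiB each with 512-byte blocks: two used directly, two behind raid0, three behind concat0
for _ in $(seq 1 7); do rpc.py bdev_malloc_create 64 512; done
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
# one subsystem exposing four namespaces, listening on the namespaced 10.0.0.2:4420
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# host side: connect (the test also passes --hostnqn/--hostid) and wait for nvme0n1..nvme0n4
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
The fio-wrapper call that follows generates the job file echoed in the trace: a [global] section selecting libaio, direct=1, bs=4096, iodepth=1, a time-based one-second run and crc32c-intel verification, plus one [jobN] section per namespace pointing at /dev/nvme0n1 through /dev/nvme0n4; the later runs only switch to randwrite and, for the queued-I/O cases, iodepth=128.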
00:11:29.385 Could not set queue depth (nvme0n3) 00:11:29.385 Could not set queue depth (nvme0n4) 00:11:29.385 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.385 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.385 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.386 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.386 fio-3.35 00:11:29.386 Starting 4 threads 00:11:30.337 00:11:30.337 job0: (groupid=0, jobs=1): err= 0: pid=75259: Fri Jul 26 10:16:43 2024 00:11:30.337 read: IOPS=1796, BW=7185KiB/s (7357kB/s)(7192KiB/1001msec) 00:11:30.337 slat (nsec): min=14556, max=58577, avg=19678.34, stdev=5930.23 00:11:30.337 clat (usec): min=168, max=630, avg=301.15, stdev=83.84 00:11:30.337 lat (usec): min=187, max=652, avg=320.83, stdev=87.32 00:11:30.337 clat percentiles (usec): 00:11:30.337 | 1.00th=[ 198], 5.00th=[ 221], 10.00th=[ 229], 20.00th=[ 239], 00:11:30.337 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 260], 60.00th=[ 273], 00:11:30.337 | 70.00th=[ 343], 80.00th=[ 379], 90.00th=[ 449], 95.00th=[ 465], 00:11:30.337 | 99.00th=[ 498], 99.50th=[ 510], 99.90th=[ 537], 99.95th=[ 627], 00:11:30.337 | 99.99th=[ 627] 00:11:30.337 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:30.337 slat (usec): min=20, max=578, avg=29.34, stdev=18.67 00:11:30.337 clat (usec): min=95, max=3483, avg=172.77, stdev=98.38 00:11:30.337 lat (usec): min=117, max=3511, avg=202.11, stdev=104.89 00:11:30.337 clat percentiles (usec): 00:11:30.337 | 1.00th=[ 104], 5.00th=[ 110], 10.00th=[ 114], 20.00th=[ 119], 00:11:30.337 | 30.00th=[ 125], 40.00th=[ 133], 50.00th=[ 155], 60.00th=[ 167], 00:11:30.337 | 70.00th=[ 184], 80.00th=[ 210], 90.00th=[ 297], 95.00th=[ 318], 00:11:30.337 | 99.00th=[ 343], 99.50th=[ 351], 99.90th=[ 611], 99.95th=[ 644], 00:11:30.337 | 99.99th=[ 3490] 00:11:30.337 bw ( KiB/s): min= 8192, max= 8192, per=20.35%, avg=8192.00, stdev= 0.00, samples=1 00:11:30.337 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:30.337 lat (usec) : 100=0.16%, 250=64.01%, 500=35.36%, 750=0.44% 00:11:30.337 lat (msec) : 4=0.03% 00:11:30.337 cpu : usr=1.90%, sys=7.50%, ctx=3846, majf=0, minf=12 00:11:30.337 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:30.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.337 issued rwts: total=1798,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.337 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:30.337 job1: (groupid=0, jobs=1): err= 0: pid=75260: Fri Jul 26 10:16:43 2024 00:11:30.337 read: IOPS=2011, BW=8048KiB/s (8241kB/s)(8056KiB/1001msec) 00:11:30.337 slat (nsec): min=12041, max=53802, avg=17685.35, stdev=6323.42 00:11:30.337 clat (usec): min=151, max=2446, avg=296.55, stdev=103.87 00:11:30.337 lat (usec): min=164, max=2472, avg=314.24, stdev=107.52 00:11:30.337 clat percentiles (usec): 00:11:30.337 | 1.00th=[ 172], 5.00th=[ 217], 10.00th=[ 225], 20.00th=[ 235], 00:11:30.337 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 262], 00:11:30.337 | 70.00th=[ 281], 80.00th=[ 412], 90.00th=[ 457], 95.00th=[ 474], 00:11:30.337 | 99.00th=[ 498], 99.50th=[ 506], 99.90th=[ 889], 99.95th=[ 947], 00:11:30.337 | 
99.99th=[ 2442] 00:11:30.337 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:30.337 slat (usec): min=14, max=104, avg=20.68, stdev= 3.74 00:11:30.337 clat (usec): min=92, max=382, avg=154.54, stdev=35.54 00:11:30.337 lat (usec): min=110, max=461, avg=175.21, stdev=36.16 00:11:30.337 clat percentiles (usec): 00:11:30.337 | 1.00th=[ 99], 5.00th=[ 106], 10.00th=[ 113], 20.00th=[ 120], 00:11:30.337 | 30.00th=[ 127], 40.00th=[ 135], 50.00th=[ 149], 60.00th=[ 172], 00:11:30.337 | 70.00th=[ 182], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 208], 00:11:30.337 | 99.00th=[ 221], 99.50th=[ 237], 99.90th=[ 251], 99.95th=[ 359], 00:11:30.337 | 99.99th=[ 383] 00:11:30.337 bw ( KiB/s): min= 8432, max= 8432, per=20.95%, avg=8432.00, stdev= 0.00, samples=1 00:11:30.337 iops : min= 2108, max= 2108, avg=2108.00, stdev= 0.00, samples=1 00:11:30.337 lat (usec) : 100=0.86%, 250=71.57%, 500=27.15%, 750=0.32%, 1000=0.07% 00:11:30.337 lat (msec) : 4=0.02% 00:11:30.337 cpu : usr=1.70%, sys=6.30%, ctx=4063, majf=0, minf=9 00:11:30.337 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:30.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.337 issued rwts: total=2014,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.337 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:30.337 job2: (groupid=0, jobs=1): err= 0: pid=75261: Fri Jul 26 10:16:43 2024 00:11:30.337 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:30.337 slat (nsec): min=12372, max=52856, avg=19786.08, stdev=5631.84 00:11:30.337 clat (usec): min=139, max=495, avg=175.73, stdev=14.12 00:11:30.337 lat (usec): min=153, max=509, avg=195.52, stdev=15.62 00:11:30.337 clat percentiles (usec): 00:11:30.337 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:11:30.337 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 178], 00:11:30.337 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 198], 00:11:30.337 | 99.00th=[ 210], 99.50th=[ 212], 99.90th=[ 229], 99.95th=[ 231], 00:11:30.337 | 99.99th=[ 494] 00:11:30.337 write: IOPS=2903, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1001msec); 0 zone resets 00:11:30.337 slat (nsec): min=14378, max=85916, avg=30702.90, stdev=10723.82 00:11:30.337 clat (usec): min=103, max=2039, avg=136.83, stdev=37.73 00:11:30.337 lat (usec): min=125, max=2064, avg=167.53, stdev=40.00 00:11:30.337 clat percentiles (usec): 00:11:30.337 | 1.00th=[ 112], 5.00th=[ 118], 10.00th=[ 122], 20.00th=[ 126], 00:11:30.337 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:11:30.337 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 159], 00:11:30.337 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 273], 99.95th=[ 347], 00:11:30.337 | 99.99th=[ 2040] 00:11:30.337 bw ( KiB/s): min=12288, max=12288, per=30.52%, avg=12288.00, stdev= 0.00, samples=1 00:11:30.337 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:30.337 lat (usec) : 250=99.93%, 500=0.05% 00:11:30.337 lat (msec) : 4=0.02% 00:11:30.337 cpu : usr=4.20%, sys=9.60%, ctx=5467, majf=0, minf=3 00:11:30.337 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:30.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.337 issued rwts: total=2560,2906,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.337 
latency : target=0, window=0, percentile=100.00%, depth=1 00:11:30.337 job3: (groupid=0, jobs=1): err= 0: pid=75262: Fri Jul 26 10:16:43 2024 00:11:30.337 read: IOPS=2735, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1000msec) 00:11:30.337 slat (nsec): min=11975, max=33505, avg=15183.33, stdev=2501.39 00:11:30.337 clat (usec): min=140, max=722, avg=172.55, stdev=19.11 00:11:30.337 lat (usec): min=153, max=736, avg=187.73, stdev=19.48 00:11:30.337 clat percentiles (usec): 00:11:30.337 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:11:30.337 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:11:30.337 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 188], 95.00th=[ 194], 00:11:30.337 | 99.00th=[ 204], 99.50th=[ 210], 99.90th=[ 404], 99.95th=[ 586], 00:11:30.338 | 99.99th=[ 725] 00:11:30.338 write: IOPS=3072, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1000msec); 0 zone resets 00:11:30.338 slat (nsec): min=13323, max=88370, avg=23488.24, stdev=6873.17 00:11:30.338 clat (usec): min=96, max=474, avg=131.50, stdev=13.86 00:11:30.338 lat (usec): min=114, max=493, avg=154.99, stdev=16.16 00:11:30.338 clat percentiles (usec): 00:11:30.338 | 1.00th=[ 104], 5.00th=[ 113], 10.00th=[ 117], 20.00th=[ 122], 00:11:30.338 | 30.00th=[ 125], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 135], 00:11:30.338 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 153], 00:11:30.338 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 186], 99.95th=[ 249], 00:11:30.338 | 99.99th=[ 474] 00:11:30.338 bw ( KiB/s): min=12288, max=12288, per=30.52%, avg=12288.00, stdev= 0.00, samples=1 00:11:30.338 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:30.338 lat (usec) : 100=0.19%, 250=99.72%, 500=0.05%, 750=0.03% 00:11:30.338 cpu : usr=1.70%, sys=9.60%, ctx=5815, majf=0, minf=11 00:11:30.338 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:30.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.338 issued rwts: total=2735,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.338 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:30.338 00:11:30.338 Run status group 0 (all jobs): 00:11:30.338 READ: bw=35.5MiB/s (37.3MB/s), 7185KiB/s-10.7MiB/s (7357kB/s-11.2MB/s), io=35.6MiB (37.3MB), run=1000-1001msec 00:11:30.338 WRITE: bw=39.3MiB/s (41.2MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=39.4MiB (41.3MB), run=1000-1001msec 00:11:30.338 00:11:30.338 Disk stats (read/write): 00:11:30.338 nvme0n1: ios=1586/1626, merge=0/0, ticks=490/312, in_queue=802, util=87.88% 00:11:30.338 nvme0n2: ios=1641/2048, merge=0/0, ticks=506/338, in_queue=844, util=88.16% 00:11:30.338 nvme0n3: ios=2127/2560, merge=0/0, ticks=377/376, in_queue=753, util=89.26% 00:11:30.338 nvme0n4: ios=2412/2560, merge=0/0, ticks=430/362, in_queue=792, util=89.81% 00:11:30.338 10:16:43 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:30.338 [global] 00:11:30.338 thread=1 00:11:30.338 invalidate=1 00:11:30.338 rw=randwrite 00:11:30.338 time_based=1 00:11:30.338 runtime=1 00:11:30.338 ioengine=libaio 00:11:30.338 direct=1 00:11:30.338 bs=4096 00:11:30.338 iodepth=1 00:11:30.338 norandommap=0 00:11:30.338 numjobs=1 00:11:30.338 00:11:30.338 verify_dump=1 00:11:30.338 verify_backlog=512 00:11:30.338 verify_state_save=0 00:11:30.338 do_verify=1 00:11:30.338 verify=crc32c-intel 00:11:30.338 [job0] 00:11:30.338 
filename=/dev/nvme0n1 00:11:30.338 [job1] 00:11:30.338 filename=/dev/nvme0n2 00:11:30.338 [job2] 00:11:30.338 filename=/dev/nvme0n3 00:11:30.338 [job3] 00:11:30.338 filename=/dev/nvme0n4 00:11:30.338 Could not set queue depth (nvme0n1) 00:11:30.338 Could not set queue depth (nvme0n2) 00:11:30.338 Could not set queue depth (nvme0n3) 00:11:30.338 Could not set queue depth (nvme0n4) 00:11:30.596 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:30.596 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:30.596 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:30.596 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:30.596 fio-3.35 00:11:30.596 Starting 4 threads 00:11:31.972 00:11:31.972 job0: (groupid=0, jobs=1): err= 0: pid=75315: Fri Jul 26 10:16:45 2024 00:11:31.972 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(11.9MiB/1001msec) 00:11:31.972 slat (nsec): min=10797, max=29109, avg=13373.40, stdev=1859.68 00:11:31.973 clat (usec): min=130, max=454, avg=161.49, stdev=12.26 00:11:31.973 lat (usec): min=142, max=468, avg=174.86, stdev=12.54 00:11:31.973 clat percentiles (usec): 00:11:31.973 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:11:31.973 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:11:31.973 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 174], 95.00th=[ 178], 00:11:31.973 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 306], 99.95th=[ 347], 00:11:31.973 | 99.99th=[ 453] 00:11:31.973 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:31.973 slat (usec): min=13, max=144, avg=20.04, stdev= 4.30 00:11:31.973 clat (usec): min=97, max=391, avg=128.97, stdev=14.07 00:11:31.973 lat (usec): min=115, max=409, avg=149.02, stdev=15.11 00:11:31.973 clat percentiles (usec): 00:11:31.973 | 1.00th=[ 108], 5.00th=[ 114], 10.00th=[ 117], 20.00th=[ 121], 00:11:31.973 | 30.00th=[ 124], 40.00th=[ 126], 50.00th=[ 128], 60.00th=[ 131], 00:11:31.973 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 141], 95.00th=[ 147], 00:11:31.973 | 99.00th=[ 165], 99.50th=[ 208], 99.90th=[ 262], 99.95th=[ 338], 00:11:31.973 | 99.99th=[ 392] 00:11:31.973 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:11:31.973 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:31.973 lat (usec) : 100=0.02%, 250=99.82%, 500=0.16% 00:11:31.973 cpu : usr=2.00%, sys=8.60%, ctx=6115, majf=0, minf=15 00:11:31.973 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:31.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.973 issued rwts: total=3042,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.973 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:31.973 job1: (groupid=0, jobs=1): err= 0: pid=75316: Fri Jul 26 10:16:45 2024 00:11:31.973 read: IOPS=3061, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1001msec) 00:11:31.973 slat (nsec): min=12600, max=41061, avg=14701.59, stdev=1875.56 00:11:31.973 clat (usec): min=130, max=646, avg=160.87, stdev=15.35 00:11:31.973 lat (usec): min=144, max=660, avg=175.58, stdev=15.39 00:11:31.973 clat percentiles (usec): 00:11:31.973 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:11:31.973 
| 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:11:31.973 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 174], 95.00th=[ 178], 00:11:31.973 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 219], 99.95th=[ 553], 00:11:31.973 | 99.99th=[ 644] 00:11:31.973 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:31.973 slat (nsec): min=14180, max=84940, avg=20699.34, stdev=3887.44 00:11:31.973 clat (usec): min=95, max=803, avg=126.38, stdev=16.01 00:11:31.973 lat (usec): min=115, max=823, avg=147.08, stdev=16.36 00:11:31.973 clat percentiles (usec): 00:11:31.973 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 115], 20.00th=[ 119], 00:11:31.973 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 128], 00:11:31.973 | 70.00th=[ 131], 80.00th=[ 135], 90.00th=[ 139], 95.00th=[ 143], 00:11:31.973 | 99.00th=[ 151], 99.50th=[ 155], 99.90th=[ 163], 99.95th=[ 338], 00:11:31.973 | 99.99th=[ 807] 00:11:31.973 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:11:31.973 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:31.973 lat (usec) : 100=0.08%, 250=99.84%, 500=0.03%, 750=0.03%, 1000=0.02% 00:11:31.973 cpu : usr=1.90%, sys=8.80%, ctx=6138, majf=0, minf=12 00:11:31.973 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:31.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.973 issued rwts: total=3065,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.973 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:31.973 job2: (groupid=0, jobs=1): err= 0: pid=75317: Fri Jul 26 10:16:45 2024 00:11:31.973 read: IOPS=2824, BW=11.0MiB/s (11.6MB/s)(11.0MiB/1001msec) 00:11:31.973 slat (nsec): min=11015, max=40197, avg=13454.52, stdev=1967.13 00:11:31.973 clat (usec): min=136, max=1720, avg=168.45, stdev=35.96 00:11:31.973 lat (usec): min=149, max=1733, avg=181.90, stdev=36.06 00:11:31.973 clat percentiles (usec): 00:11:31.973 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:11:31.973 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:11:31.973 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 186], 00:11:31.973 | 99.00th=[ 196], 99.50th=[ 204], 99.90th=[ 644], 99.95th=[ 783], 00:11:31.973 | 99.99th=[ 1713] 00:11:31.973 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:31.973 slat (nsec): min=13781, max=82920, avg=20379.49, stdev=3973.54 00:11:31.973 clat (usec): min=107, max=241, avg=134.74, stdev=10.51 00:11:31.973 lat (usec): min=127, max=264, avg=155.12, stdev=11.16 00:11:31.973 clat percentiles (usec): 00:11:31.973 | 1.00th=[ 116], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 127], 00:11:31.973 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:11:31.973 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 153], 00:11:31.973 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 200], 99.95th=[ 206], 00:11:31.973 | 99.99th=[ 241] 00:11:31.973 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:11:31.973 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:31.973 lat (usec) : 250=99.86%, 500=0.07%, 750=0.03%, 1000=0.02% 00:11:31.973 lat (msec) : 2=0.02% 00:11:31.973 cpu : usr=2.20%, sys=8.10%, ctx=5899, majf=0, minf=7 00:11:31.973 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:31.973 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.973 issued rwts: total=2827,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.973 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:31.973 job3: (groupid=0, jobs=1): err= 0: pid=75318: Fri Jul 26 10:16:45 2024 00:11:31.973 read: IOPS=2808, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1001msec) 00:11:31.973 slat (nsec): min=11599, max=30764, avg=13604.40, stdev=1517.71 00:11:31.973 clat (usec): min=141, max=2352, avg=168.80, stdev=43.17 00:11:31.973 lat (usec): min=155, max=2373, avg=182.40, stdev=43.33 00:11:31.973 clat percentiles (usec): 00:11:31.973 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 159], 00:11:31.973 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:11:31.973 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 182], 95.00th=[ 188], 00:11:31.973 | 99.00th=[ 198], 99.50th=[ 202], 99.90th=[ 437], 99.95th=[ 445], 00:11:31.973 | 99.99th=[ 2343] 00:11:31.973 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:31.973 slat (usec): min=17, max=162, avg=20.40, stdev= 4.48 00:11:31.973 clat (usec): min=104, max=186, avg=134.98, stdev=10.33 00:11:31.973 lat (usec): min=124, max=343, avg=155.38, stdev=11.72 00:11:31.973 clat percentiles (usec): 00:11:31.973 | 1.00th=[ 117], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 127], 00:11:31.973 | 30.00th=[ 130], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:11:31.973 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 155], 00:11:31.973 | 99.00th=[ 165], 99.50th=[ 172], 99.90th=[ 178], 99.95th=[ 184], 00:11:31.973 | 99.99th=[ 188] 00:11:31.973 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:11:31.973 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:31.973 lat (usec) : 250=99.95%, 500=0.03% 00:11:31.973 lat (msec) : 4=0.02% 00:11:31.973 cpu : usr=1.90%, sys=8.30%, ctx=5884, majf=0, minf=11 00:11:31.973 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:31.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.973 issued rwts: total=2811,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.973 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:31.973 00:11:31.973 Run status group 0 (all jobs): 00:11:31.973 READ: bw=45.8MiB/s (48.1MB/s), 11.0MiB/s-12.0MiB/s (11.5MB/s-12.5MB/s), io=45.9MiB (48.1MB), run=1001-1001msec 00:11:31.973 WRITE: bw=48.0MiB/s (50.3MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=48.0MiB (50.3MB), run=1001-1001msec 00:11:31.973 00:11:31.973 Disk stats (read/write): 00:11:31.973 nvme0n1: ios=2610/2764, merge=0/0, ticks=447/377, in_queue=824, util=88.78% 00:11:31.973 nvme0n2: ios=2609/2799, merge=0/0, ticks=463/390, in_queue=853, util=89.10% 00:11:31.973 nvme0n3: ios=2547/2560, merge=0/0, ticks=442/362, in_queue=804, util=89.23% 00:11:31.973 nvme0n4: ios=2533/2560, merge=0/0, ticks=436/369, in_queue=805, util=89.78% 00:11:31.973 10:16:45 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:31.973 [global] 00:11:31.973 thread=1 00:11:31.973 invalidate=1 00:11:31.973 rw=write 00:11:31.973 time_based=1 00:11:31.973 runtime=1 00:11:31.973 ioengine=libaio 00:11:31.973 direct=1 00:11:31.973 bs=4096 00:11:31.973 iodepth=128 00:11:31.973 
norandommap=0 00:11:31.973 numjobs=1 00:11:31.973 00:11:31.973 verify_dump=1 00:11:31.973 verify_backlog=512 00:11:31.973 verify_state_save=0 00:11:31.973 do_verify=1 00:11:31.973 verify=crc32c-intel 00:11:31.973 [job0] 00:11:31.973 filename=/dev/nvme0n1 00:11:31.973 [job1] 00:11:31.973 filename=/dev/nvme0n2 00:11:31.973 [job2] 00:11:31.973 filename=/dev/nvme0n3 00:11:31.973 [job3] 00:11:31.973 filename=/dev/nvme0n4 00:11:31.973 Could not set queue depth (nvme0n1) 00:11:31.973 Could not set queue depth (nvme0n2) 00:11:31.973 Could not set queue depth (nvme0n3) 00:11:31.973 Could not set queue depth (nvme0n4) 00:11:31.973 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:31.973 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:31.973 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:31.973 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:31.973 fio-3.35 00:11:31.974 Starting 4 threads 00:11:33.350 00:11:33.350 job0: (groupid=0, jobs=1): err= 0: pid=75372: Fri Jul 26 10:16:46 2024 00:11:33.350 read: IOPS=2333, BW=9335KiB/s (9559kB/s)(9372KiB/1004msec) 00:11:33.350 slat (usec): min=6, max=14038, avg=193.62, stdev=944.86 00:11:33.350 clat (usec): min=539, max=63827, avg=24226.66, stdev=7895.19 00:11:33.350 lat (usec): min=3161, max=63848, avg=24420.29, stdev=7985.72 00:11:33.350 clat percentiles (usec): 00:11:33.350 | 1.00th=[10290], 5.00th=[17957], 10.00th=[19792], 20.00th=[20317], 00:11:33.350 | 30.00th=[20579], 40.00th=[20841], 50.00th=[21103], 60.00th=[21627], 00:11:33.350 | 70.00th=[22414], 80.00th=[28181], 90.00th=[36439], 95.00th=[40633], 00:11:33.350 | 99.00th=[49546], 99.50th=[53740], 99.90th=[58459], 99.95th=[58459], 00:11:33.350 | 99.99th=[63701] 00:11:33.350 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:11:33.350 slat (usec): min=11, max=13129, avg=206.00, stdev=1031.16 00:11:33.350 clat (usec): min=9972, max=71896, avg=26929.01, stdev=13876.55 00:11:33.350 lat (usec): min=9999, max=71925, avg=27135.00, stdev=13964.60 00:11:33.350 clat percentiles (usec): 00:11:33.350 | 1.00th=[10290], 5.00th=[10945], 10.00th=[12256], 20.00th=[16057], 00:11:33.350 | 30.00th=[20055], 40.00th=[21103], 50.00th=[25297], 60.00th=[26084], 00:11:33.350 | 70.00th=[26608], 80.00th=[33817], 90.00th=[47449], 95.00th=[61080], 00:11:33.350 | 99.00th=[68682], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:11:33.350 | 99.99th=[71828] 00:11:33.350 bw ( KiB/s): min= 8192, max=12288, per=15.50%, avg=10240.00, stdev=2896.31, samples=2 00:11:33.350 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:11:33.350 lat (usec) : 750=0.02% 00:11:33.350 lat (msec) : 4=0.41%, 10=0.06%, 20=21.13%, 50=73.00%, 100=5.38% 00:11:33.350 cpu : usr=2.29%, sys=7.88%, ctx=226, majf=0, minf=11 00:11:33.350 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:33.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:33.350 issued rwts: total=2343,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:33.350 job1: (groupid=0, jobs=1): err= 0: pid=75373: Fri Jul 26 10:16:46 2024 00:11:33.350 read: IOPS=3573, BW=14.0MiB/s 
(14.6MB/s)(14.0MiB/1003msec) 00:11:33.350 slat (usec): min=8, max=21742, avg=148.94, stdev=1069.10 00:11:33.350 clat (usec): min=10164, max=46200, avg=20727.62, stdev=5383.34 00:11:33.350 lat (usec): min=10178, max=46239, avg=20876.56, stdev=5439.04 00:11:33.350 clat percentiles (usec): 00:11:33.350 | 1.00th=[11338], 5.00th=[14484], 10.00th=[14615], 20.00th=[15008], 00:11:33.350 | 30.00th=[16581], 40.00th=[20055], 50.00th=[20841], 60.00th=[21103], 00:11:33.350 | 70.00th=[21365], 80.00th=[23987], 90.00th=[30802], 95.00th=[31589], 00:11:33.350 | 99.00th=[33424], 99.50th=[33424], 99.90th=[37487], 99.95th=[40633], 00:11:33.350 | 99.99th=[46400] 00:11:33.350 write: IOPS=3670, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1003msec); 0 zone resets 00:11:33.350 slat (usec): min=6, max=12169, avg=118.34, stdev=728.02 00:11:33.350 clat (usec): min=2784, max=31538, avg=14369.59, stdev=3015.45 00:11:33.350 lat (usec): min=2807, max=31586, avg=14487.93, stdev=2962.91 00:11:33.350 clat percentiles (usec): 00:11:33.350 | 1.00th=[ 7832], 5.00th=[10290], 10.00th=[11076], 20.00th=[11863], 00:11:33.350 | 30.00th=[12387], 40.00th=[13173], 50.00th=[13829], 60.00th=[14746], 00:11:33.350 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17957], 95.00th=[18220], 00:11:33.350 | 99.00th=[22414], 99.50th=[22676], 99.90th=[24511], 99.95th=[30802], 00:11:33.350 | 99.99th=[31589] 00:11:33.350 bw ( KiB/s): min=12296, max=16384, per=21.71%, avg=14340.00, stdev=2890.65, samples=2 00:11:33.350 iops : min= 3074, max= 4096, avg=3585.00, stdev=722.66, samples=2 00:11:33.350 lat (msec) : 4=0.28%, 10=1.75%, 20=65.81%, 50=32.16% 00:11:33.350 cpu : usr=4.39%, sys=9.78%, ctx=159, majf=0, minf=11 00:11:33.350 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:33.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:33.350 issued rwts: total=3584,3682,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:33.350 job2: (groupid=0, jobs=1): err= 0: pid=75378: Fri Jul 26 10:16:46 2024 00:11:33.350 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:11:33.350 slat (usec): min=8, max=2878, avg=90.69, stdev=426.34 00:11:33.350 clat (usec): min=8916, max=13607, avg=12231.83, stdev=553.39 00:11:33.350 lat (usec): min=10565, max=15792, avg=12322.53, stdev=362.44 00:11:33.350 clat percentiles (usec): 00:11:33.350 | 1.00th=[ 9634], 5.00th=[11600], 10.00th=[11731], 20.00th=[11994], 00:11:33.350 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12256], 60.00th=[12387], 00:11:33.350 | 70.00th=[12518], 80.00th=[12518], 90.00th=[12780], 95.00th=[12911], 00:11:33.350 | 99.00th=[13173], 99.50th=[13304], 99.90th=[13566], 99.95th=[13566], 00:11:33.350 | 99.99th=[13566] 00:11:33.350 write: IOPS=5201, BW=20.3MiB/s (21.3MB/s)(20.4MiB/1003msec); 0 zone resets 00:11:33.350 slat (usec): min=11, max=2925, avg=95.08, stdev=405.51 00:11:33.350 clat (usec): min=478, max=13431, avg=12269.62, stdev=1066.62 00:11:33.350 lat (usec): min=2408, max=14603, avg=12364.70, stdev=994.87 00:11:33.350 clat percentiles (usec): 00:11:33.350 | 1.00th=[ 6194], 5.00th=[11469], 10.00th=[11994], 20.00th=[12125], 00:11:33.350 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12387], 60.00th=[12518], 00:11:33.350 | 70.00th=[12649], 80.00th=[12780], 90.00th=[12911], 95.00th=[12911], 00:11:33.350 | 99.00th=[13173], 99.50th=[13304], 99.90th=[13435], 99.95th=[13435], 00:11:33.350 | 99.99th=[13435] 
00:11:33.350 bw ( KiB/s): min=20480, max=20480, per=31.01%, avg=20480.00, stdev= 0.00, samples=2 00:11:33.350 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:11:33.350 lat (usec) : 500=0.01% 00:11:33.350 lat (msec) : 4=0.31%, 10=2.13%, 20=97.55% 00:11:33.350 cpu : usr=5.19%, sys=13.57%, ctx=373, majf=0, minf=6 00:11:33.350 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:33.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:33.350 issued rwts: total=5120,5217,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:33.350 job3: (groupid=0, jobs=1): err= 0: pid=75379: Fri Jul 26 10:16:46 2024 00:11:33.350 read: IOPS=5008, BW=19.6MiB/s (20.5MB/s)(19.6MiB/1001msec) 00:11:33.350 slat (usec): min=5, max=3818, avg=92.93, stdev=407.22 00:11:33.350 clat (usec): min=376, max=15900, avg=12275.58, stdev=1492.85 00:11:33.350 lat (usec): min=2469, max=17565, avg=12368.52, stdev=1501.44 00:11:33.350 clat percentiles (usec): 00:11:33.350 | 1.00th=[ 6259], 5.00th=[10290], 10.00th=[10683], 20.00th=[11207], 00:11:33.350 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12518], 00:11:33.350 | 70.00th=[13042], 80.00th=[13435], 90.00th=[13829], 95.00th=[14091], 00:11:33.350 | 99.00th=[15008], 99.50th=[15139], 99.90th=[15533], 99.95th=[15795], 00:11:33.350 | 99.99th=[15926] 00:11:33.350 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:11:33.350 slat (usec): min=11, max=3525, avg=96.47, stdev=444.03 00:11:33.350 clat (usec): min=9212, max=16450, avg=12683.06, stdev=836.15 00:11:33.350 lat (usec): min=9236, max=16496, avg=12779.53, stdev=928.19 00:11:33.350 clat percentiles (usec): 00:11:33.350 | 1.00th=[10683], 5.00th=[11731], 10.00th=[11863], 20.00th=[12256], 00:11:33.350 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12649], 00:11:33.350 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13435], 95.00th=[14222], 00:11:33.350 | 99.00th=[15795], 99.50th=[16057], 99.90th=[16319], 99.95th=[16450], 00:11:33.350 | 99.99th=[16450] 00:11:33.350 bw ( KiB/s): min=20480, max=20480, per=31.01%, avg=20480.00, stdev= 0.00, samples=1 00:11:33.350 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:11:33.350 lat (usec) : 500=0.01% 00:11:33.350 lat (msec) : 4=0.35%, 10=1.28%, 20=98.36% 00:11:33.350 cpu : usr=4.60%, sys=14.40%, ctx=432, majf=0, minf=7 00:11:33.350 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:33.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:33.350 issued rwts: total=5014,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:33.350 00:11:33.350 Run status group 0 (all jobs): 00:11:33.350 READ: bw=62.5MiB/s (65.5MB/s), 9335KiB/s-19.9MiB/s (9559kB/s-20.9MB/s), io=62.7MiB (65.8MB), run=1001-1004msec 00:11:33.350 WRITE: bw=64.5MiB/s (67.6MB/s), 9.96MiB/s-20.3MiB/s (10.4MB/s-21.3MB/s), io=64.8MiB (67.9MB), run=1001-1004msec 00:11:33.350 00:11:33.350 Disk stats (read/write): 00:11:33.350 nvme0n1: ios=2026/2048, merge=0/0, ticks=15940/18631, in_queue=34571, util=89.98% 00:11:33.350 nvme0n2: ios=3121/3328, merge=0/0, ticks=58487/45036, in_queue=103523, util=88.90% 00:11:33.350 nvme0n3: ios=4358/4608, merge=0/0, 
ticks=11879/12407, in_queue=24286, util=89.34% 00:11:33.350 nvme0n4: ios=4203/4608, merge=0/0, ticks=16505/16600, in_queue=33105, util=89.90% 00:11:33.350 10:16:46 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:33.350 [global] 00:11:33.350 thread=1 00:11:33.350 invalidate=1 00:11:33.350 rw=randwrite 00:11:33.350 time_based=1 00:11:33.350 runtime=1 00:11:33.350 ioengine=libaio 00:11:33.350 direct=1 00:11:33.350 bs=4096 00:11:33.350 iodepth=128 00:11:33.350 norandommap=0 00:11:33.350 numjobs=1 00:11:33.350 00:11:33.350 verify_dump=1 00:11:33.350 verify_backlog=512 00:11:33.350 verify_state_save=0 00:11:33.350 do_verify=1 00:11:33.351 verify=crc32c-intel 00:11:33.351 [job0] 00:11:33.351 filename=/dev/nvme0n1 00:11:33.351 [job1] 00:11:33.351 filename=/dev/nvme0n2 00:11:33.351 [job2] 00:11:33.351 filename=/dev/nvme0n3 00:11:33.351 [job3] 00:11:33.351 filename=/dev/nvme0n4 00:11:33.351 Could not set queue depth (nvme0n1) 00:11:33.351 Could not set queue depth (nvme0n2) 00:11:33.351 Could not set queue depth (nvme0n3) 00:11:33.351 Could not set queue depth (nvme0n4) 00:11:33.351 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:33.351 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:33.351 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:33.351 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:33.351 fio-3.35 00:11:33.351 Starting 4 threads 00:11:34.727 00:11:34.727 job0: (groupid=0, jobs=1): err= 0: pid=75439: Fri Jul 26 10:16:47 2024 00:11:34.727 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:11:34.727 slat (usec): min=3, max=12002, avg=90.01, stdev=601.52 00:11:34.727 clat (usec): min=6775, max=24189, avg=12379.41, stdev=2280.42 00:11:34.727 lat (usec): min=6786, max=27943, avg=12469.42, stdev=2306.01 00:11:34.727 clat percentiles (usec): 00:11:34.727 | 1.00th=[ 6915], 5.00th=[ 8455], 10.00th=[10814], 20.00th=[11469], 00:11:34.727 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12387], 00:11:34.727 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13566], 95.00th=[17171], 00:11:34.727 | 99.00th=[22152], 99.50th=[22938], 99.90th=[24249], 99.95th=[24249], 00:11:34.727 | 99.99th=[24249] 00:11:34.727 write: IOPS=5363, BW=21.0MiB/s (22.0MB/s)(21.0MiB/1002msec); 0 zone resets 00:11:34.727 slat (usec): min=5, max=9319, avg=94.10, stdev=564.88 00:11:34.727 clat (usec): min=395, max=24123, avg=11831.81, stdev=1687.98 00:11:34.727 lat (usec): min=4448, max=24129, avg=11925.92, stdev=1634.78 00:11:34.727 clat percentiles (usec): 00:11:34.727 | 1.00th=[ 5800], 5.00th=[ 8586], 10.00th=[10683], 20.00th=[11207], 00:11:34.727 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[11994], 00:11:34.727 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13173], 95.00th=[15270], 00:11:34.727 | 99.00th=[16057], 99.50th=[16188], 99.90th=[20579], 99.95th=[20579], 00:11:34.727 | 99.99th=[24249] 00:11:34.727 bw ( KiB/s): min=22475, max=22475, per=28.02%, avg=22475.00, stdev= 0.00, samples=1 00:11:34.727 iops : min= 5618, max= 5618, avg=5618.00, stdev= 0.00, samples=1 00:11:34.727 lat (usec) : 500=0.01% 00:11:34.727 lat (msec) : 10=7.00%, 20=91.81%, 50=1.17% 00:11:34.727 cpu : usr=3.90%, sys=12.09%, ctx=291, majf=0, minf=9 00:11:34.727 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:34.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:34.727 issued rwts: total=5120,5374,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.727 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:34.727 job1: (groupid=0, jobs=1): err= 0: pid=75440: Fri Jul 26 10:16:47 2024 00:11:34.727 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:11:34.727 slat (usec): min=8, max=7186, avg=87.96, stdev=552.06 00:11:34.727 clat (usec): min=6394, max=21382, avg=12205.72, stdev=1615.77 00:11:34.727 lat (usec): min=6435, max=24124, avg=12293.68, stdev=1627.09 00:11:34.727 clat percentiles (usec): 00:11:34.727 | 1.00th=[ 7177], 5.00th=[10683], 10.00th=[11207], 20.00th=[11469], 00:11:34.727 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[12387], 00:11:34.727 | 70.00th=[12649], 80.00th=[13042], 90.00th=[13304], 95.00th=[13698], 00:11:34.727 | 99.00th=[19792], 99.50th=[19792], 99.90th=[21365], 99.95th=[21365], 00:11:34.727 | 99.99th=[21365] 00:11:34.727 write: IOPS=5512, BW=21.5MiB/s (22.6MB/s)(21.6MiB/1002msec); 0 zone resets 00:11:34.727 slat (usec): min=10, max=8948, avg=92.26, stdev=548.82 00:11:34.727 clat (usec): min=285, max=16498, avg=11662.75, stdev=1313.71 00:11:34.727 lat (usec): min=4751, max=16693, avg=11755.01, stdev=1224.53 00:11:34.727 clat percentiles (usec): 00:11:34.727 | 1.00th=[ 6063], 5.00th=[10290], 10.00th=[10683], 20.00th=[11076], 00:11:34.727 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11731], 60.00th=[11863], 00:11:34.727 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12780], 95.00th=[13042], 00:11:34.727 | 99.00th=[16319], 99.50th=[16450], 99.90th=[16450], 99.95th=[16450], 00:11:34.727 | 99.99th=[16450] 00:11:34.727 bw ( KiB/s): min=22080, max=22080, per=27.53%, avg=22080.00, stdev= 0.00, samples=1 00:11:34.727 iops : min= 5520, max= 5520, avg=5520.00, stdev= 0.00, samples=1 00:11:34.727 lat (usec) : 500=0.01% 00:11:34.727 lat (msec) : 10=4.04%, 20=95.73%, 50=0.23% 00:11:34.727 cpu : usr=3.60%, sys=15.28%, ctx=227, majf=0, minf=17 00:11:34.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:34.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:34.727 issued rwts: total=5120,5524,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.727 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:34.728 job2: (groupid=0, jobs=1): err= 0: pid=75441: Fri Jul 26 10:16:47 2024 00:11:34.728 read: IOPS=4559, BW=17.8MiB/s (18.7MB/s)(17.9MiB/1003msec) 00:11:34.728 slat (usec): min=9, max=10002, avg=101.04, stdev=654.92 00:11:34.728 clat (usec): min=2741, max=24653, avg=13836.26, stdev=2140.99 00:11:34.728 lat (usec): min=2754, max=27908, avg=13937.30, stdev=2166.21 00:11:34.728 clat percentiles (usec): 00:11:34.728 | 1.00th=[ 7242], 5.00th=[ 8848], 10.00th=[12911], 20.00th=[13304], 00:11:34.728 | 30.00th=[13566], 40.00th=[13829], 50.00th=[13960], 60.00th=[14091], 00:11:34.728 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15270], 95.00th=[15926], 00:11:34.728 | 99.00th=[22152], 99.50th=[22676], 99.90th=[23200], 99.95th=[23200], 00:11:34.728 | 99.99th=[24773] 00:11:34.728 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:11:34.728 slat (usec): min=11, max=10891, avg=108.87, stdev=679.81 00:11:34.728 clat (usec): 
min=7037, max=20357, avg=13855.27, stdev=1403.84 00:11:34.728 lat (usec): min=9175, max=20384, avg=13964.14, stdev=1270.79 00:11:34.728 clat percentiles (usec): 00:11:34.728 | 1.00th=[ 8848], 5.00th=[12125], 10.00th=[12518], 20.00th=[13173], 00:11:34.728 | 30.00th=[13566], 40.00th=[13698], 50.00th=[13829], 60.00th=[14091], 00:11:34.728 | 70.00th=[14222], 80.00th=[14484], 90.00th=[14877], 95.00th=[15401], 00:11:34.728 | 99.00th=[20055], 99.50th=[20317], 99.90th=[20317], 99.95th=[20317], 00:11:34.728 | 99.99th=[20317] 00:11:34.728 bw ( KiB/s): min=16904, max=19920, per=22.95%, avg=18412.00, stdev=2132.63, samples=2 00:11:34.728 iops : min= 4226, max= 4980, avg=4603.00, stdev=533.16, samples=2 00:11:34.728 lat (msec) : 4=0.38%, 10=3.91%, 20=94.30%, 50=1.41% 00:11:34.728 cpu : usr=4.49%, sys=12.18%, ctx=198, majf=0, minf=7 00:11:34.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:34.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:34.728 issued rwts: total=4573,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:34.728 job3: (groupid=0, jobs=1): err= 0: pid=75442: Fri Jul 26 10:16:47 2024 00:11:34.728 read: IOPS=4481, BW=17.5MiB/s (18.4MB/s)(17.5MiB/1002msec) 00:11:34.728 slat (usec): min=6, max=10635, avg=105.07, stdev=676.28 00:11:34.728 clat (usec): min=924, max=25877, avg=14103.51, stdev=2488.15 00:11:34.728 lat (usec): min=2041, max=27442, avg=14208.59, stdev=2493.79 00:11:34.728 clat percentiles (usec): 00:11:34.728 | 1.00th=[ 6849], 5.00th=[ 8586], 10.00th=[12911], 20.00th=[13566], 00:11:34.728 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14091], 60.00th=[14353], 00:11:34.728 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15270], 95.00th=[15795], 00:11:34.728 | 99.00th=[23200], 99.50th=[24249], 99.90th=[25822], 99.95th=[25822], 00:11:34.728 | 99.99th=[25822] 00:11:34.728 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:11:34.728 slat (usec): min=5, max=11976, avg=106.70, stdev=642.53 00:11:34.728 clat (usec): min=2267, max=25812, avg=13809.52, stdev=1831.38 00:11:34.728 lat (usec): min=2280, max=25822, avg=13916.21, stdev=1751.35 00:11:34.728 clat percentiles (usec): 00:11:34.728 | 1.00th=[ 6259], 5.00th=[12256], 10.00th=[12780], 20.00th=[13173], 00:11:34.728 | 30.00th=[13566], 40.00th=[13698], 50.00th=[13960], 60.00th=[14091], 00:11:34.728 | 70.00th=[14222], 80.00th=[14484], 90.00th=[15139], 95.00th=[15533], 00:11:34.728 | 99.00th=[20579], 99.50th=[20841], 99.90th=[21103], 99.95th=[21103], 00:11:34.728 | 99.99th=[25822] 00:11:34.728 bw ( KiB/s): min=19409, max=19409, per=24.20%, avg=19409.00, stdev= 0.00, samples=1 00:11:34.728 iops : min= 4852, max= 4852, avg=4852.00, stdev= 0.00, samples=1 00:11:34.728 lat (usec) : 1000=0.01% 00:11:34.728 lat (msec) : 4=0.49%, 10=4.52%, 20=92.42%, 50=2.56% 00:11:34.728 cpu : usr=3.90%, sys=12.69%, ctx=253, majf=0, minf=17 00:11:34.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:34.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:34.728 issued rwts: total=4490,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:34.728 00:11:34.728 Run status group 0 (all jobs): 00:11:34.728 READ: bw=75.2MiB/s 
(78.8MB/s), 17.5MiB/s-20.0MiB/s (18.4MB/s-20.9MB/s), io=75.4MiB (79.1MB), run=1002-1003msec 00:11:34.728 WRITE: bw=78.3MiB/s (82.1MB/s), 17.9MiB/s-21.5MiB/s (18.8MB/s-22.6MB/s), io=78.6MiB (82.4MB), run=1002-1003msec 00:11:34.728 00:11:34.728 Disk stats (read/write): 00:11:34.728 nvme0n1: ios=4462/4608, merge=0/0, ticks=50342/50050, in_queue=100392, util=87.07% 00:11:34.728 nvme0n2: ios=4490/4608, merge=0/0, ticks=50745/49960, in_queue=100705, util=88.26% 00:11:34.728 nvme0n3: ios=3710/4096, merge=0/0, ticks=48691/52716, in_queue=101407, util=89.13% 00:11:34.728 nvme0n4: ios=3651/4096, merge=0/0, ticks=48534/52813, in_queue=101347, util=89.59% 00:11:34.728 10:16:47 -- target/fio.sh@55 -- # sync 00:11:34.728 10:16:47 -- target/fio.sh@59 -- # fio_pid=75455 00:11:34.728 10:16:47 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:34.728 10:16:47 -- target/fio.sh@61 -- # sleep 3 00:11:34.728 [global] 00:11:34.728 thread=1 00:11:34.728 invalidate=1 00:11:34.728 rw=read 00:11:34.728 time_based=1 00:11:34.728 runtime=10 00:11:34.728 ioengine=libaio 00:11:34.728 direct=1 00:11:34.728 bs=4096 00:11:34.728 iodepth=1 00:11:34.728 norandommap=1 00:11:34.728 numjobs=1 00:11:34.728 00:11:34.728 [job0] 00:11:34.728 filename=/dev/nvme0n1 00:11:34.728 [job1] 00:11:34.728 filename=/dev/nvme0n2 00:11:34.728 [job2] 00:11:34.728 filename=/dev/nvme0n3 00:11:34.728 [job3] 00:11:34.728 filename=/dev/nvme0n4 00:11:34.728 Could not set queue depth (nvme0n1) 00:11:34.728 Could not set queue depth (nvme0n2) 00:11:34.728 Could not set queue depth (nvme0n3) 00:11:34.728 Could not set queue depth (nvme0n4) 00:11:34.728 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:34.728 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:34.728 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:34.728 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:34.728 fio-3.35 00:11:34.728 Starting 4 threads 00:11:38.016 10:16:50 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:38.016 fio: pid=75498, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:38.016 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=56487936, buflen=4096 00:11:38.016 10:16:51 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:38.016 fio: pid=75497, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:38.016 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=65478656, buflen=4096 00:11:38.016 10:16:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:38.016 10:16:51 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:38.275 fio: pid=75495, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:38.275 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=12849152, buflen=4096 00:11:38.275 10:16:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:38.275 10:16:51 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:38.534 fio: pid=75496, err=121/file:io_u.c:1889, func=io_u 
error, error=Remote I/O error 00:11:38.534 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=7856128, buflen=4096 00:11:38.534 00:11:38.534 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75495: Fri Jul 26 10:16:51 2024 00:11:38.534 read: IOPS=5666, BW=22.1MiB/s (23.2MB/s)(76.3MiB/3445msec) 00:11:38.534 slat (usec): min=8, max=17028, avg=15.89, stdev=159.53 00:11:38.534 clat (usec): min=125, max=1540, avg=159.42, stdev=31.95 00:11:38.534 lat (usec): min=138, max=17199, avg=175.31, stdev=163.02 00:11:38.534 clat percentiles (usec): 00:11:38.534 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:11:38.534 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 159], 00:11:38.534 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 182], 00:11:38.534 | 99.00th=[ 251], 99.50th=[ 265], 99.90th=[ 562], 99.95th=[ 775], 00:11:38.534 | 99.99th=[ 1237] 00:11:38.534 bw ( KiB/s): min=19976, max=23640, per=31.07%, avg=22697.83, stdev=1408.90, samples=6 00:11:38.534 iops : min= 4994, max= 5910, avg=5674.33, stdev=352.26, samples=6 00:11:38.534 lat (usec) : 250=98.98%, 500=0.89%, 750=0.06%, 1000=0.04% 00:11:38.534 lat (msec) : 2=0.03% 00:11:38.534 cpu : usr=1.54%, sys=6.65%, ctx=19526, majf=0, minf=1 00:11:38.534 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.534 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.534 issued rwts: total=19522,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.534 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:38.534 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75496: Fri Jul 26 10:16:51 2024 00:11:38.534 read: IOPS=4944, BW=19.3MiB/s (20.2MB/s)(71.5MiB/3702msec) 00:11:38.534 slat (usec): min=8, max=11207, avg=15.26, stdev=143.18 00:11:38.534 clat (usec): min=2, max=27943, avg=185.74, stdev=227.08 00:11:38.534 lat (usec): min=137, max=27960, avg=201.00, stdev=273.16 00:11:38.534 clat percentiles (usec): 00:11:38.534 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:11:38.534 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 169], 00:11:38.534 | 70.00th=[ 210], 80.00th=[ 233], 90.00th=[ 245], 95.00th=[ 253], 00:11:38.534 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 562], 99.95th=[ 1909], 00:11:38.534 | 99.99th=[ 7832] 00:11:38.534 bw ( KiB/s): min=14976, max=23368, per=27.01%, avg=19730.57, stdev=3434.87, samples=7 00:11:38.534 iops : min= 3744, max= 5842, avg=4932.43, stdev=858.64, samples=7 00:11:38.534 lat (usec) : 4=0.01%, 250=93.27%, 500=6.58%, 750=0.05%, 1000=0.01% 00:11:38.535 lat (msec) : 2=0.02%, 4=0.03%, 10=0.02%, 50=0.01% 00:11:38.535 cpu : usr=1.35%, sys=5.89%, ctx=18326, majf=0, minf=1 00:11:38.535 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.535 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.535 issued rwts: total=18303,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.535 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:38.535 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75497: Fri Jul 26 10:16:51 2024 00:11:38.535 read: IOPS=5070, BW=19.8MiB/s (20.8MB/s)(62.4MiB/3153msec) 00:11:38.535 slat (usec): 
min=8, max=12442, avg=15.87, stdev=120.27 00:11:38.535 clat (usec): min=3, max=3987, avg=179.96, stdev=52.57 00:11:38.535 lat (usec): min=153, max=12695, avg=195.83, stdev=131.90 00:11:38.535 clat percentiles (usec): 00:11:38.535 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:11:38.535 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:11:38.535 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 202], 95.00th=[ 235], 00:11:38.535 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 424], 99.95th=[ 594], 00:11:38.535 | 99.99th=[ 3228] 00:11:38.535 bw ( KiB/s): min=17696, max=21384, per=28.13%, avg=20550.17, stdev=1407.98, samples=6 00:11:38.535 iops : min= 4424, max= 5346, avg=5137.50, stdev=351.98, samples=6 00:11:38.535 lat (usec) : 4=0.01%, 250=97.88%, 500=2.02%, 750=0.04%, 1000=0.01% 00:11:38.535 lat (msec) : 2=0.01%, 4=0.03% 00:11:38.535 cpu : usr=1.33%, sys=6.44%, ctx=15996, majf=0, minf=1 00:11:38.535 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.535 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.535 issued rwts: total=15987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.535 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:38.535 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75498: Fri Jul 26 10:16:51 2024 00:11:38.535 read: IOPS=4723, BW=18.4MiB/s (19.3MB/s)(53.9MiB/2920msec) 00:11:38.535 slat (nsec): min=8376, max=97913, avg=14943.91, stdev=4132.85 00:11:38.535 clat (usec): min=133, max=7970, avg=195.43, stdev=111.05 00:11:38.535 lat (usec): min=152, max=7990, avg=210.38, stdev=110.51 00:11:38.535 clat percentiles (usec): 00:11:38.535 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:11:38.535 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 186], 00:11:38.535 | 70.00th=[ 221], 80.00th=[ 233], 90.00th=[ 245], 95.00th=[ 253], 00:11:38.535 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 644], 99.95th=[ 1827], 00:11:38.535 | 99.99th=[ 7767] 00:11:38.535 bw ( KiB/s): min=15968, max=21456, per=26.72%, avg=19516.20, stdev=2510.38, samples=5 00:11:38.535 iops : min= 3992, max= 5364, avg=4879.00, stdev=627.55, samples=5 00:11:38.535 lat (usec) : 250=93.76%, 500=6.13%, 750=0.03%, 1000=0.01% 00:11:38.535 lat (msec) : 2=0.02%, 4=0.03%, 10=0.01% 00:11:38.535 cpu : usr=1.30%, sys=6.47%, ctx=13802, majf=0, minf=1 00:11:38.535 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.535 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.535 issued rwts: total=13792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.535 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:38.535 00:11:38.535 Run status group 0 (all jobs): 00:11:38.535 READ: bw=71.3MiB/s (74.8MB/s), 18.4MiB/s-22.1MiB/s (19.3MB/s-23.2MB/s), io=264MiB (277MB), run=2920-3702msec 00:11:38.535 00:11:38.535 Disk stats (read/write): 00:11:38.535 nvme0n1: ios=19064/0, merge=0/0, ticks=3088/0, in_queue=3088, util=95.31% 00:11:38.535 nvme0n2: ios=17799/0, merge=0/0, ticks=3266/0, in_queue=3266, util=95.56% 00:11:38.535 nvme0n3: ios=15860/0, merge=0/0, ticks=2898/0, in_queue=2898, util=96.15% 00:11:38.535 nvme0n4: ios=13590/0, merge=0/0, ticks=2628/0, in_queue=2628, util=96.39% 00:11:38.535 10:16:51 -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:38.535 10:16:51 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:38.794 10:16:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:38.794 10:16:52 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:39.053 10:16:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:39.053 10:16:52 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:39.312 10:16:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:39.312 10:16:52 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:39.571 10:16:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:39.571 10:16:52 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:39.830 10:16:53 -- target/fio.sh@69 -- # fio_status=0 00:11:39.830 10:16:53 -- target/fio.sh@70 -- # wait 75455 00:11:39.830 10:16:53 -- target/fio.sh@70 -- # fio_status=4 00:11:39.830 10:16:53 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:39.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.830 10:16:53 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:39.830 10:16:53 -- common/autotest_common.sh@1198 -- # local i=0 00:11:39.830 10:16:53 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:39.830 10:16:53 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.830 10:16:53 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:39.830 10:16:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.830 nvmf hotplug test: fio failed as expected 00:11:39.830 10:16:53 -- common/autotest_common.sh@1210 -- # return 0 00:11:39.830 10:16:53 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:39.830 10:16:53 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:39.830 10:16:53 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:40.088 10:16:53 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:40.088 10:16:53 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:40.088 10:16:53 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:40.088 10:16:53 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:40.088 10:16:53 -- target/fio.sh@91 -- # nvmftestfini 00:11:40.088 10:16:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:40.088 10:16:53 -- nvmf/common.sh@116 -- # sync 00:11:40.088 10:16:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:40.088 10:16:53 -- nvmf/common.sh@119 -- # set +e 00:11:40.088 10:16:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:40.088 10:16:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:40.088 rmmod nvme_tcp 00:11:40.088 rmmod nvme_fabrics 00:11:40.088 rmmod nvme_keyring 00:11:40.088 10:16:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:40.346 10:16:53 -- nvmf/common.sh@123 -- # set -e 00:11:40.346 10:16:53 -- nvmf/common.sh@124 -- # return 0 00:11:40.346 10:16:53 -- nvmf/common.sh@477 -- # '[' -n 75069 ']' 00:11:40.346 10:16:53 -- 
nvmf/common.sh@478 -- # killprocess 75069 00:11:40.346 10:16:53 -- common/autotest_common.sh@926 -- # '[' -z 75069 ']' 00:11:40.346 10:16:53 -- common/autotest_common.sh@930 -- # kill -0 75069 00:11:40.346 10:16:53 -- common/autotest_common.sh@931 -- # uname 00:11:40.346 10:16:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:40.346 10:16:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75069 00:11:40.346 killing process with pid 75069 00:11:40.346 10:16:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:40.346 10:16:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:40.346 10:16:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75069' 00:11:40.346 10:16:53 -- common/autotest_common.sh@945 -- # kill 75069 00:11:40.346 10:16:53 -- common/autotest_common.sh@950 -- # wait 75069 00:11:40.346 10:16:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:40.346 10:16:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:40.346 10:16:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:40.346 10:16:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:40.346 10:16:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:40.346 10:16:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.346 10:16:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:40.346 10:16:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.606 10:16:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:40.606 ************************************ 00:11:40.606 END TEST nvmf_fio_target 00:11:40.606 ************************************ 00:11:40.606 00:11:40.606 real 0m19.210s 00:11:40.606 user 1m12.098s 00:11:40.606 sys 0m10.431s 00:11:40.606 10:16:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:40.606 10:16:53 -- common/autotest_common.sh@10 -- # set +x 00:11:40.606 10:16:53 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:40.606 10:16:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:40.606 10:16:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:40.606 10:16:53 -- common/autotest_common.sh@10 -- # set +x 00:11:40.606 ************************************ 00:11:40.606 START TEST nvmf_bdevio 00:11:40.606 ************************************ 00:11:40.606 10:16:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:40.606 * Looking for test storage... 
00:11:40.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:40.606 10:16:53 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:40.606 10:16:53 -- nvmf/common.sh@7 -- # uname -s 00:11:40.606 10:16:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.606 10:16:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.606 10:16:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.606 10:16:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.606 10:16:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.606 10:16:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.606 10:16:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.606 10:16:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.606 10:16:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.606 10:16:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.606 10:16:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:11:40.606 10:16:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:11:40.606 10:16:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.606 10:16:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.606 10:16:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:40.606 10:16:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:40.606 10:16:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.606 10:16:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.606 10:16:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.606 10:16:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.606 10:16:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.606 10:16:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.606 10:16:53 -- 
paths/export.sh@5 -- # export PATH 00:11:40.606 10:16:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.606 10:16:53 -- nvmf/common.sh@46 -- # : 0 00:11:40.606 10:16:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:40.606 10:16:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:40.606 10:16:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:40.606 10:16:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.606 10:16:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.606 10:16:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:40.606 10:16:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:40.606 10:16:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:40.606 10:16:53 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:40.606 10:16:53 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:40.606 10:16:53 -- target/bdevio.sh@14 -- # nvmftestinit 00:11:40.606 10:16:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:40.606 10:16:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.606 10:16:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:40.606 10:16:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:40.606 10:16:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:40.606 10:16:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.606 10:16:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:40.606 10:16:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.606 10:16:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:40.606 10:16:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:40.606 10:16:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:40.606 10:16:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:40.606 10:16:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:40.606 10:16:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:40.606 10:16:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:40.606 10:16:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:40.606 10:16:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:40.606 10:16:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:40.606 10:16:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:40.606 10:16:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:40.606 10:16:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:40.606 10:16:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:40.606 10:16:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:40.606 10:16:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:40.606 10:16:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:40.606 10:16:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:40.606 10:16:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:40.606 
10:16:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:40.606 Cannot find device "nvmf_tgt_br" 00:11:40.606 10:16:54 -- nvmf/common.sh@154 -- # true 00:11:40.606 10:16:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:40.606 Cannot find device "nvmf_tgt_br2" 00:11:40.607 10:16:54 -- nvmf/common.sh@155 -- # true 00:11:40.607 10:16:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:40.607 10:16:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:40.607 Cannot find device "nvmf_tgt_br" 00:11:40.607 10:16:54 -- nvmf/common.sh@157 -- # true 00:11:40.607 10:16:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:40.607 Cannot find device "nvmf_tgt_br2" 00:11:40.607 10:16:54 -- nvmf/common.sh@158 -- # true 00:11:40.607 10:16:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:40.879 10:16:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:40.879 10:16:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:40.879 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:40.879 10:16:54 -- nvmf/common.sh@161 -- # true 00:11:40.879 10:16:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:40.879 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:40.879 10:16:54 -- nvmf/common.sh@162 -- # true 00:11:40.879 10:16:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:40.879 10:16:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:40.879 10:16:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:40.879 10:16:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:40.879 10:16:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:40.879 10:16:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:40.879 10:16:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:40.879 10:16:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:40.879 10:16:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:40.879 10:16:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:40.879 10:16:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:40.879 10:16:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:40.879 10:16:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:40.879 10:16:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:40.879 10:16:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:40.879 10:16:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:40.879 10:16:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:40.879 10:16:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:40.879 10:16:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:40.879 10:16:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:40.879 10:16:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:40.879 10:16:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:40.879 10:16:54 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:40.879 10:16:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:40.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:40.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:11:40.879 00:11:40.879 --- 10.0.0.2 ping statistics --- 00:11:40.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.879 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:11:40.879 10:16:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:40.879 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:40.879 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:11:40.879 00:11:40.879 --- 10.0.0.3 ping statistics --- 00:11:40.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.879 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:11:40.879 10:16:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:40.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:40.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:11:40.879 00:11:40.879 --- 10.0.0.1 ping statistics --- 00:11:40.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.879 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:11:40.879 10:16:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:40.879 10:16:54 -- nvmf/common.sh@421 -- # return 0 00:11:40.879 10:16:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:40.879 10:16:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:40.879 10:16:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:40.879 10:16:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:40.879 10:16:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:40.879 10:16:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:40.879 10:16:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:41.138 10:16:54 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:41.138 10:16:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:41.138 10:16:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:41.138 10:16:54 -- common/autotest_common.sh@10 -- # set +x 00:11:41.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.138 10:16:54 -- nvmf/common.sh@469 -- # nvmfpid=75760 00:11:41.138 10:16:54 -- nvmf/common.sh@470 -- # waitforlisten 75760 00:11:41.138 10:16:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:41.138 10:16:54 -- common/autotest_common.sh@819 -- # '[' -z 75760 ']' 00:11:41.138 10:16:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.138 10:16:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:41.138 10:16:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.138 10:16:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:41.138 10:16:54 -- common/autotest_common.sh@10 -- # set +x 00:11:41.138 [2024-07-26 10:16:54.401267] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
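For reference, the network plumbing that the nvmf_veth_init trace above assembles, condensed into plain commands. Everything here is lifted from the xtrace output (interface names, addresses, and firewall rules exactly as logged); the intermediate "ip link set ... up" steps are omitted for brevity, so this is a summary of what already ran, not an extra setup step.

    # initiator side stays in the default namespace, target side moves into nvmf_tgt_ns_spdk
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target listener address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link add nvmf_br type bridge                                           # bridge joining both sides
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP port
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The ping checks that follow confirm this topology: 10.0.0.2 and 10.0.0.3 are reachable from the initiator side, and 10.0.0.1 is reachable from inside the namespace.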
00:11:41.138 [2024-07-26 10:16:54.401401] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.138 [2024-07-26 10:16:54.544789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.397 [2024-07-26 10:16:54.653025] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:41.397 [2024-07-26 10:16:54.653462] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.397 [2024-07-26 10:16:54.653668] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.397 [2024-07-26 10:16:54.653896] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:41.397 [2024-07-26 10:16:54.654244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:41.397 [2024-07-26 10:16:54.654317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:41.397 [2024-07-26 10:16:54.654429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.397 [2024-07-26 10:16:54.654417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:41.964 10:16:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:41.964 10:16:55 -- common/autotest_common.sh@852 -- # return 0 00:11:41.964 10:16:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:41.964 10:16:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:41.964 10:16:55 -- common/autotest_common.sh@10 -- # set +x 00:11:42.222 10:16:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.222 10:16:55 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.222 10:16:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:42.222 10:16:55 -- common/autotest_common.sh@10 -- # set +x 00:11:42.222 [2024-07-26 10:16:55.437079] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.222 10:16:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:42.222 10:16:55 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:42.222 10:16:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:42.222 10:16:55 -- common/autotest_common.sh@10 -- # set +x 00:11:42.222 Malloc0 00:11:42.222 10:16:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:42.222 10:16:55 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:42.222 10:16:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:42.222 10:16:55 -- common/autotest_common.sh@10 -- # set +x 00:11:42.222 10:16:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:42.222 10:16:55 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:42.222 10:16:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:42.222 10:16:55 -- common/autotest_common.sh@10 -- # set +x 00:11:42.222 10:16:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:42.222 10:16:55 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.222 10:16:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:42.222 10:16:55 -- common/autotest_common.sh@10 -- # set +x 00:11:42.222 
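The rpc_cmd calls above provision the target before bdevio connects to it. Since rpc_cmd effectively forwards its arguments to scripts/rpc.py (the same script the fio test used earlier for cleanup), the equivalent explicit invocations would look roughly like the sketch below; method names and arguments are taken verbatim from the trace, only the wrapper is spelled out.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, 8192-byte max I/O unit
    $RPC bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # expose the bdev as namespace 1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420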
[2024-07-26 10:16:55.509339] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.222 10:16:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:42.222 10:16:55 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:42.222 10:16:55 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:42.222 10:16:55 -- nvmf/common.sh@520 -- # config=() 00:11:42.222 10:16:55 -- nvmf/common.sh@520 -- # local subsystem config 00:11:42.222 10:16:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:42.222 10:16:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:42.222 { 00:11:42.222 "params": { 00:11:42.222 "name": "Nvme$subsystem", 00:11:42.222 "trtype": "$TEST_TRANSPORT", 00:11:42.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:42.222 "adrfam": "ipv4", 00:11:42.222 "trsvcid": "$NVMF_PORT", 00:11:42.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:42.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:42.222 "hdgst": ${hdgst:-false}, 00:11:42.222 "ddgst": ${ddgst:-false} 00:11:42.222 }, 00:11:42.222 "method": "bdev_nvme_attach_controller" 00:11:42.222 } 00:11:42.222 EOF 00:11:42.223 )") 00:11:42.223 10:16:55 -- nvmf/common.sh@542 -- # cat 00:11:42.223 10:16:55 -- nvmf/common.sh@544 -- # jq . 00:11:42.223 10:16:55 -- nvmf/common.sh@545 -- # IFS=, 00:11:42.223 10:16:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:42.223 "params": { 00:11:42.223 "name": "Nvme1", 00:11:42.223 "trtype": "tcp", 00:11:42.223 "traddr": "10.0.0.2", 00:11:42.223 "adrfam": "ipv4", 00:11:42.223 "trsvcid": "4420", 00:11:42.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:42.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:42.223 "hdgst": false, 00:11:42.223 "ddgst": false 00:11:42.223 }, 00:11:42.223 "method": "bdev_nvme_attach_controller" 00:11:42.223 }' 00:11:42.223 [2024-07-26 10:16:55.562542] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:11:42.223 [2024-07-26 10:16:55.562643] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75796 ] 00:11:42.481 [2024-07-26 10:16:55.701901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:42.481 [2024-07-26 10:16:55.801942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.481 [2024-07-26 10:16:55.802105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.481 [2024-07-26 10:16:55.802113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.740 [2024-07-26 10:16:55.976531] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
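The single-line printf output above is hard to read; reformatted (content unchanged), this is the bdev_nvme_attach_controller entry that gen_nvmf_target_json pipes to bdevio over /dev/fd/62. The trace only shows this per-controller entry, not the wrapper the full configuration file puts around it.

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }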
00:11:42.740 [2024-07-26 10:16:55.976870] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:42.740 I/O targets: 00:11:42.740 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:42.740 00:11:42.740 00:11:42.740 CUnit - A unit testing framework for C - Version 2.1-3 00:11:42.740 http://cunit.sourceforge.net/ 00:11:42.740 00:11:42.740 00:11:42.740 Suite: bdevio tests on: Nvme1n1 00:11:42.740 Test: blockdev write read block ...passed 00:11:42.740 Test: blockdev write zeroes read block ...passed 00:11:42.740 Test: blockdev write zeroes read no split ...passed 00:11:42.740 Test: blockdev write zeroes read split ...passed 00:11:42.740 Test: blockdev write zeroes read split partial ...passed 00:11:42.740 Test: blockdev reset ...[2024-07-26 10:16:56.010974] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:42.740 [2024-07-26 10:16:56.011235] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd0720 (9): Bad file descriptor 00:11:42.740 passed 00:11:42.740 Test: blockdev write read 8 blocks ...[2024-07-26 10:16:56.025709] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:42.740 passed 00:11:42.740 Test: blockdev write read size > 128k ...passed 00:11:42.740 Test: blockdev write read invalid size ...passed 00:11:42.740 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:42.740 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:42.740 Test: blockdev write read max offset ...passed 00:11:42.740 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:42.740 Test: blockdev writev readv 8 blocks ...passed 00:11:42.740 Test: blockdev writev readv 30 x 1block ...passed 00:11:42.740 Test: blockdev writev readv block ...passed 00:11:42.740 Test: blockdev writev readv size > 128k ...passed 00:11:42.740 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:42.740 Test: blockdev comparev and writev ...[2024-07-26 10:16:56.036006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:42.740 [2024-07-26 10:16:56.036059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:42.740 [2024-07-26 10:16:56.036091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:42.740 [2024-07-26 10:16:56.036115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:42.740 [2024-07-26 10:16:56.036426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:42.740 [2024-07-26 10:16:56.036448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:42.740 [2024-07-26 10:16:56.036469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:42.740 [2024-07-26 10:16:56.036481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:42.740 [2024-07-26 10:16:56.037213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:42.740 [2024-07-26 10:16:56.037257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:42.740 [2024-07-26 10:16:56.037283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:42.740 [2024-07-26 10:16:56.037296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:42.740 [2024-07-26 10:16:56.037608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:42.740 [2024-07-26 10:16:56.037635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:42.740 [2024-07-26 10:16:56.037656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:42.740 [2024-07-26 10:16:56.037669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:42.740 passed 00:11:42.740 Test: blockdev nvme passthru rw ...passed 00:11:42.740 Test: blockdev nvme passthru vendor specific ...passed 00:11:42.740 Test: blockdev nvme admin passthru ...[2024-07-26 10:16:56.039248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:42.740 [2024-07-26 10:16:56.039293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:42.740 [2024-07-26 10:16:56.039426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:42.740 [2024-07-26 10:16:56.039446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:42.740 [2024-07-26 10:16:56.039569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:42.740 [2024-07-26 10:16:56.039613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:42.741 [2024-07-26 10:16:56.039739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:42.741 [2024-07-26 10:16:56.039758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:42.741 passed 00:11:42.741 Test: blockdev copy ...passed 00:11:42.741 00:11:42.741 Run Summary: Type Total Ran Passed Failed Inactive 00:11:42.741 suites 1 1 n/a 0 0 00:11:42.741 tests 23 23 23 0 0 00:11:42.741 asserts 152 152 152 0 n/a 00:11:42.741 00:11:42.741 Elapsed time = 0.143 seconds 00:11:42.999 10:16:56 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.999 10:16:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:42.999 10:16:56 -- common/autotest_common.sh@10 -- # set +x 00:11:42.999 10:16:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:42.999 10:16:56 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:42.999 10:16:56 -- target/bdevio.sh@30 -- # nvmftestfini 00:11:42.999 10:16:56 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:11:42.999 10:16:56 -- nvmf/common.sh@116 -- # sync 00:11:42.999 10:16:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:42.999 10:16:56 -- nvmf/common.sh@119 -- # set +e 00:11:42.999 10:16:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:42.999 10:16:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:42.999 rmmod nvme_tcp 00:11:42.999 rmmod nvme_fabrics 00:11:42.999 rmmod nvme_keyring 00:11:42.999 10:16:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:42.999 10:16:56 -- nvmf/common.sh@123 -- # set -e 00:11:43.000 10:16:56 -- nvmf/common.sh@124 -- # return 0 00:11:43.000 10:16:56 -- nvmf/common.sh@477 -- # '[' -n 75760 ']' 00:11:43.000 10:16:56 -- nvmf/common.sh@478 -- # killprocess 75760 00:11:43.000 10:16:56 -- common/autotest_common.sh@926 -- # '[' -z 75760 ']' 00:11:43.000 10:16:56 -- common/autotest_common.sh@930 -- # kill -0 75760 00:11:43.000 10:16:56 -- common/autotest_common.sh@931 -- # uname 00:11:43.000 10:16:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:43.000 10:16:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75760 00:11:43.000 killing process with pid 75760 00:11:43.000 10:16:56 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:11:43.000 10:16:56 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:11:43.000 10:16:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75760' 00:11:43.000 10:16:56 -- common/autotest_common.sh@945 -- # kill 75760 00:11:43.000 10:16:56 -- common/autotest_common.sh@950 -- # wait 75760 00:11:43.258 10:16:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:43.258 10:16:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:43.258 10:16:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:43.258 10:16:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:43.258 10:16:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:43.258 10:16:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.258 10:16:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:43.258 10:16:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.258 10:16:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:43.258 ************************************ 00:11:43.258 END TEST nvmf_bdevio 00:11:43.258 ************************************ 00:11:43.258 00:11:43.258 real 0m2.760s 00:11:43.258 user 0m9.160s 00:11:43.258 sys 0m0.752s 00:11:43.258 10:16:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:43.258 10:16:56 -- common/autotest_common.sh@10 -- # set +x 00:11:43.258 10:16:56 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:11:43.258 10:16:56 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:43.258 10:16:56 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:43.258 10:16:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:43.258 10:16:56 -- common/autotest_common.sh@10 -- # set +x 00:11:43.258 ************************************ 00:11:43.258 START TEST nvmf_bdevio_no_huge 00:11:43.258 ************************************ 00:11:43.258 10:16:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:43.532 * Looking for test storage... 
00:11:43.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:43.532 10:16:56 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:43.532 10:16:56 -- nvmf/common.sh@7 -- # uname -s 00:11:43.532 10:16:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.532 10:16:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.532 10:16:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.532 10:16:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.532 10:16:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.532 10:16:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.532 10:16:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.532 10:16:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.532 10:16:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.532 10:16:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.532 10:16:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:11:43.532 10:16:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:11:43.532 10:16:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.532 10:16:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.532 10:16:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:43.532 10:16:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:43.532 10:16:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.532 10:16:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.532 10:16:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.532 10:16:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.532 10:16:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.532 10:16:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.532 10:16:56 -- 
paths/export.sh@5 -- # export PATH 00:11:43.532 10:16:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.532 10:16:56 -- nvmf/common.sh@46 -- # : 0 00:11:43.532 10:16:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:43.532 10:16:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:43.532 10:16:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:43.532 10:16:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.532 10:16:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.532 10:16:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:43.532 10:16:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:43.532 10:16:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:43.532 10:16:56 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:43.532 10:16:56 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:43.532 10:16:56 -- target/bdevio.sh@14 -- # nvmftestinit 00:11:43.532 10:16:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:43.532 10:16:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.532 10:16:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:43.532 10:16:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:43.532 10:16:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:43.532 10:16:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.532 10:16:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:43.532 10:16:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.532 10:16:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:43.532 10:16:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:43.532 10:16:56 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:43.532 10:16:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:43.532 10:16:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:43.532 10:16:56 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:43.532 10:16:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.532 10:16:56 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.532 10:16:56 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:43.532 10:16:56 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:43.532 10:16:56 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:43.532 10:16:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:43.532 10:16:56 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:43.532 10:16:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.532 10:16:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:43.532 10:16:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:43.532 10:16:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:43.533 10:16:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:43.533 10:16:56 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:43.533 
10:16:56 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:43.533 Cannot find device "nvmf_tgt_br" 00:11:43.533 10:16:56 -- nvmf/common.sh@154 -- # true 00:11:43.533 10:16:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:43.533 Cannot find device "nvmf_tgt_br2" 00:11:43.533 10:16:56 -- nvmf/common.sh@155 -- # true 00:11:43.533 10:16:56 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:43.533 10:16:56 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:43.533 Cannot find device "nvmf_tgt_br" 00:11:43.533 10:16:56 -- nvmf/common.sh@157 -- # true 00:11:43.533 10:16:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:43.533 Cannot find device "nvmf_tgt_br2" 00:11:43.533 10:16:56 -- nvmf/common.sh@158 -- # true 00:11:43.533 10:16:56 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:43.533 10:16:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:43.533 10:16:56 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:43.533 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:43.533 10:16:56 -- nvmf/common.sh@161 -- # true 00:11:43.533 10:16:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:43.533 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:43.533 10:16:56 -- nvmf/common.sh@162 -- # true 00:11:43.533 10:16:56 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:43.533 10:16:56 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:43.533 10:16:56 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:43.533 10:16:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:43.533 10:16:56 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:43.533 10:16:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:43.798 10:16:56 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:43.798 10:16:56 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:43.798 10:16:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:43.798 10:16:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:43.798 10:16:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:43.798 10:16:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:43.798 10:16:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:43.798 10:16:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:43.798 10:16:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:43.798 10:16:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:43.798 10:16:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:43.798 10:16:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:43.798 10:16:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:43.798 10:16:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:43.798 10:16:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:43.798 10:16:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:43.798 10:16:57 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:43.798 10:16:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:43.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:43.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:11:43.798 00:11:43.798 --- 10.0.0.2 ping statistics --- 00:11:43.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.798 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:11:43.798 10:16:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:43.798 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:43.798 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:11:43.798 00:11:43.798 --- 10.0.0.3 ping statistics --- 00:11:43.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.798 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:11:43.798 10:16:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:43.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:43.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:11:43.798 00:11:43.798 --- 10.0.0.1 ping statistics --- 00:11:43.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.798 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:11:43.798 10:16:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.798 10:16:57 -- nvmf/common.sh@421 -- # return 0 00:11:43.798 10:16:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:43.798 10:16:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.798 10:16:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:43.798 10:16:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:43.798 10:16:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.798 10:16:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:43.798 10:16:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:43.798 10:16:57 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:43.798 10:16:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:43.798 10:16:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:43.798 10:16:57 -- common/autotest_common.sh@10 -- # set +x 00:11:43.798 10:16:57 -- nvmf/common.sh@469 -- # nvmfpid=75972 00:11:43.798 10:16:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:11:43.798 10:16:57 -- nvmf/common.sh@470 -- # waitforlisten 75972 00:11:43.798 10:16:57 -- common/autotest_common.sh@819 -- # '[' -z 75972 ']' 00:11:43.798 10:16:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.798 10:16:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:43.798 10:16:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.798 10:16:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:43.798 10:16:57 -- common/autotest_common.sh@10 -- # set +x 00:11:43.798 [2024-07-26 10:16:57.182128] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:11:43.798 [2024-07-26 10:16:57.182216] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:11:44.057 [2024-07-26 10:16:57.318693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.057 [2024-07-26 10:16:57.410395] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:44.057 [2024-07-26 10:16:57.410561] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.057 [2024-07-26 10:16:57.410573] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.057 [2024-07-26 10:16:57.410582] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:44.057 [2024-07-26 10:16:57.410760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:44.057 [2024-07-26 10:16:57.411233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:44.057 [2024-07-26 10:16:57.411401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:44.057 [2024-07-26 10:16:57.411409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.992 10:16:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:44.992 10:16:58 -- common/autotest_common.sh@852 -- # return 0 00:11:44.992 10:16:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:44.992 10:16:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:44.992 10:16:58 -- common/autotest_common.sh@10 -- # set +x 00:11:44.992 10:16:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.992 10:16:58 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:44.992 10:16:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.992 10:16:58 -- common/autotest_common.sh@10 -- # set +x 00:11:44.992 [2024-07-26 10:16:58.151535] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.992 10:16:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.992 10:16:58 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:44.992 10:16:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.992 10:16:58 -- common/autotest_common.sh@10 -- # set +x 00:11:44.992 Malloc0 00:11:44.992 10:16:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.992 10:16:58 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:44.992 10:16:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.992 10:16:58 -- common/autotest_common.sh@10 -- # set +x 00:11:44.992 10:16:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.992 10:16:58 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:44.992 10:16:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.992 10:16:58 -- common/autotest_common.sh@10 -- # set +x 00:11:44.992 10:16:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.992 10:16:58 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.992 10:16:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.992 10:16:58 -- common/autotest_common.sh@10 -- # set +x 00:11:44.992 
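The intended difference from the nvmf_bdevio run above is the memory backing: here both the target and the bdevio app are launched with --no-huge and a fixed -s 1024 (MB) reservation, so memory comes from anonymous pages rather than hugepages, and the EAL parameters switch to --iova-mode=va accordingly. The core mask stays -m 0x78 (binary 01111000, i.e. cores 3-6), which matches the four reactors reported just above. Side by side, the two target launch lines from the trace:

    # nvmf_bdevio: hugepage-backed target
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x78
    # nvmf_bdevio_no_huge: anonymous memory, 1024 MB reserved up front
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78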
[2024-07-26 10:16:58.191704] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.992 10:16:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.992 10:16:58 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:11:44.992 10:16:58 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:44.992 10:16:58 -- nvmf/common.sh@520 -- # config=() 00:11:44.992 10:16:58 -- nvmf/common.sh@520 -- # local subsystem config 00:11:44.992 10:16:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:44.992 10:16:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:44.992 { 00:11:44.992 "params": { 00:11:44.992 "name": "Nvme$subsystem", 00:11:44.992 "trtype": "$TEST_TRANSPORT", 00:11:44.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:44.992 "adrfam": "ipv4", 00:11:44.992 "trsvcid": "$NVMF_PORT", 00:11:44.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:44.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:44.992 "hdgst": ${hdgst:-false}, 00:11:44.992 "ddgst": ${ddgst:-false} 00:11:44.992 }, 00:11:44.992 "method": "bdev_nvme_attach_controller" 00:11:44.992 } 00:11:44.992 EOF 00:11:44.992 )") 00:11:44.992 10:16:58 -- nvmf/common.sh@542 -- # cat 00:11:44.992 10:16:58 -- nvmf/common.sh@544 -- # jq . 00:11:44.992 10:16:58 -- nvmf/common.sh@545 -- # IFS=, 00:11:44.992 10:16:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:44.992 "params": { 00:11:44.992 "name": "Nvme1", 00:11:44.992 "trtype": "tcp", 00:11:44.992 "traddr": "10.0.0.2", 00:11:44.992 "adrfam": "ipv4", 00:11:44.992 "trsvcid": "4420", 00:11:44.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:44.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:44.992 "hdgst": false, 00:11:44.992 "ddgst": false 00:11:44.992 }, 00:11:44.992 "method": "bdev_nvme_attach_controller" 00:11:44.992 }' 00:11:44.992 [2024-07-26 10:16:58.254942] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:11:44.992 [2024-07-26 10:16:58.255038] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid76008 ] 00:11:44.992 [2024-07-26 10:16:58.394987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:45.251 [2024-07-26 10:16:58.490946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.251 [2024-07-26 10:16:58.491072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:45.251 [2024-07-26 10:16:58.491079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.251 [2024-07-26 10:16:58.645793] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:11:45.251 [2024-07-26 10:16:58.645854] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:45.251 I/O targets: 00:11:45.251 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:45.251 00:11:45.251 00:11:45.251 CUnit - A unit testing framework for C - Version 2.1-3 00:11:45.251 http://cunit.sourceforge.net/ 00:11:45.251 00:11:45.251 00:11:45.251 Suite: bdevio tests on: Nvme1n1 00:11:45.251 Test: blockdev write read block ...passed 00:11:45.251 Test: blockdev write zeroes read block ...passed 00:11:45.251 Test: blockdev write zeroes read no split ...passed 00:11:45.251 Test: blockdev write zeroes read split ...passed 00:11:45.251 Test: blockdev write zeroes read split partial ...passed 00:11:45.251 Test: blockdev reset ...[2024-07-26 10:16:58.685936] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:45.251 [2024-07-26 10:16:58.686039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925590 (9): Bad file descriptor 00:11:45.251 [2024-07-26 10:16:58.703197] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:45.251 passed 00:11:45.251 Test: blockdev write read 8 blocks ...passed 00:11:45.251 Test: blockdev write read size > 128k ...passed 00:11:45.251 Test: blockdev write read invalid size ...passed 00:11:45.251 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:45.251 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:45.251 Test: blockdev write read max offset ...passed 00:11:45.509 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:45.509 Test: blockdev writev readv 8 blocks ...passed 00:11:45.509 Test: blockdev writev readv 30 x 1block ...passed 00:11:45.509 Test: blockdev writev readv block ...passed 00:11:45.509 Test: blockdev writev readv size > 128k ...passed 00:11:45.509 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:45.509 Test: blockdev comparev and writev ...[2024-07-26 10:16:58.713416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:45.509 [2024-07-26 10:16:58.713460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:45.509 [2024-07-26 10:16:58.713484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:45.509 [2024-07-26 10:16:58.713498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:45.509 [2024-07-26 10:16:58.713812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:45.509 [2024-07-26 10:16:58.713953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:45.509 [2024-07-26 10:16:58.713977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:45.509 [2024-07-26 10:16:58.713990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:45.509 [2024-07-26 10:16:58.714289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:45.509 [2024-07-26 10:16:58.714311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:45.509 [2024-07-26 10:16:58.714331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:45.509 [2024-07-26 10:16:58.714343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:45.509 [2024-07-26 10:16:58.714648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:45.509 [2024-07-26 10:16:58.714675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:45.509 [2024-07-26 10:16:58.714696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:45.509 [2024-07-26 10:16:58.714708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:45.509 passed 00:11:45.509 Test: blockdev nvme passthru rw ...passed 00:11:45.509 Test: blockdev nvme passthru vendor specific ...[2024-07-26 10:16:58.716031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:45.509 [2024-07-26 10:16:58.716062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:45.509 [2024-07-26 10:16:58.716383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:45.509 [2024-07-26 10:16:58.716411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:45.509 [2024-07-26 10:16:58.716680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:45.509 [2024-07-26 10:16:58.716709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:45.509 [2024-07-26 10:16:58.717019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:45.509 [2024-07-26 10:16:58.717049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:45.509 passed 00:11:45.509 Test: blockdev nvme admin passthru ...passed 00:11:45.509 Test: blockdev copy ...passed 00:11:45.509 00:11:45.509 Run Summary: Type Total Ran Passed Failed Inactive 00:11:45.509 suites 1 1 n/a 0 0 00:11:45.509 tests 23 23 23 0 0 00:11:45.509 asserts 152 152 152 0 n/a 00:11:45.509 00:11:45.509 Elapsed time = 0.181 seconds 00:11:45.768 10:16:59 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:45.768 10:16:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:45.768 10:16:59 -- common/autotest_common.sh@10 -- # set +x 00:11:45.768 10:16:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:45.768 10:16:59 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:45.768 10:16:59 -- target/bdevio.sh@30 -- # nvmftestfini 00:11:45.768 10:16:59 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:11:45.768 10:16:59 -- nvmf/common.sh@116 -- # sync 00:11:45.768 10:16:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:45.768 10:16:59 -- nvmf/common.sh@119 -- # set +e 00:11:45.768 10:16:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:45.768 10:16:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:45.768 rmmod nvme_tcp 00:11:45.768 rmmod nvme_fabrics 00:11:45.768 rmmod nvme_keyring 00:11:45.768 10:16:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:45.768 10:16:59 -- nvmf/common.sh@123 -- # set -e 00:11:45.768 10:16:59 -- nvmf/common.sh@124 -- # return 0 00:11:45.768 10:16:59 -- nvmf/common.sh@477 -- # '[' -n 75972 ']' 00:11:45.768 10:16:59 -- nvmf/common.sh@478 -- # killprocess 75972 00:11:45.768 10:16:59 -- common/autotest_common.sh@926 -- # '[' -z 75972 ']' 00:11:45.768 10:16:59 -- common/autotest_common.sh@930 -- # kill -0 75972 00:11:45.768 10:16:59 -- common/autotest_common.sh@931 -- # uname 00:11:45.768 10:16:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:45.768 10:16:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75972 00:11:45.768 10:16:59 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:11:45.768 10:16:59 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:11:45.768 10:16:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75972' 00:11:45.768 killing process with pid 75972 00:11:45.768 10:16:59 -- common/autotest_common.sh@945 -- # kill 75972 00:11:45.768 10:16:59 -- common/autotest_common.sh@950 -- # wait 75972 00:11:46.335 10:16:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:46.335 10:16:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:46.335 10:16:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:46.335 10:16:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:46.335 10:16:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:46.335 10:16:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.335 10:16:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.335 10:16:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.335 10:16:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:46.335 00:11:46.335 real 0m2.901s 00:11:46.335 user 0m9.648s 00:11:46.335 sys 0m1.130s 00:11:46.335 10:16:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.335 ************************************ 00:11:46.335 10:16:59 -- common/autotest_common.sh@10 -- # set +x 00:11:46.335 END TEST nvmf_bdevio_no_huge 00:11:46.335 ************************************ 00:11:46.335 10:16:59 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:46.335 10:16:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:46.335 10:16:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:46.335 10:16:59 -- common/autotest_common.sh@10 -- # set +x 00:11:46.335 ************************************ 00:11:46.336 START TEST nvmf_tls 00:11:46.336 ************************************ 00:11:46.336 10:16:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:46.336 * Looking for test storage... 
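The nvmftestfini teardown traced above follows the pattern every nvmf test in this job ends with. A simplified sketch, using the pid from this run and assuming _remove_spdk_ns (whose output is redirected away in the trace) amounts to deleting the per-test namespace:

    # unload the kernel initiator modules the test environment pulled in
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # stop the nvmf_tgt started for this test (pid 75972 in this run), as killprocess does
    kill 75972 && wait 75972
    # tear down the namespaced target network and the leftover initiator address
    ip netns delete nvmf_tgt_ns_spdk    # assumption: what _remove_spdk_ns boils down to here
    ip -4 addr flush nvmf_init_if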
00:11:46.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:46.336 10:16:59 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:46.336 10:16:59 -- nvmf/common.sh@7 -- # uname -s 00:11:46.336 10:16:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.336 10:16:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.336 10:16:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.336 10:16:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.336 10:16:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.336 10:16:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.336 10:16:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.336 10:16:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.336 10:16:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.336 10:16:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.336 10:16:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:11:46.336 10:16:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:11:46.336 10:16:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.336 10:16:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.336 10:16:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:46.336 10:16:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:46.336 10:16:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.336 10:16:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.336 10:16:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.336 10:16:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.336 10:16:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.336 10:16:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.336 10:16:59 -- paths/export.sh@5 
-- # export PATH 00:11:46.336 10:16:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.336 10:16:59 -- nvmf/common.sh@46 -- # : 0 00:11:46.336 10:16:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:46.336 10:16:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:46.336 10:16:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:46.336 10:16:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.336 10:16:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.336 10:16:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:46.336 10:16:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:46.336 10:16:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:46.336 10:16:59 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:46.336 10:16:59 -- target/tls.sh@71 -- # nvmftestinit 00:11:46.336 10:16:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:46.336 10:16:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.336 10:16:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:46.336 10:16:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:46.336 10:16:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:46.336 10:16:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.336 10:16:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.336 10:16:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.336 10:16:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:46.336 10:16:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:46.336 10:16:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:46.336 10:16:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:46.336 10:16:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:46.336 10:16:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:46.336 10:16:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:46.336 10:16:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:46.336 10:16:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:46.336 10:16:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:46.336 10:16:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:46.336 10:16:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:46.336 10:16:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:46.336 10:16:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:46.336 10:16:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:46.336 10:16:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:46.336 10:16:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:46.336 10:16:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:46.336 10:16:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:46.336 10:16:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br 
nomaster 00:11:46.336 Cannot find device "nvmf_tgt_br" 00:11:46.336 10:16:59 -- nvmf/common.sh@154 -- # true 00:11:46.336 10:16:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:46.336 Cannot find device "nvmf_tgt_br2" 00:11:46.336 10:16:59 -- nvmf/common.sh@155 -- # true 00:11:46.336 10:16:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:46.596 10:16:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:46.596 Cannot find device "nvmf_tgt_br" 00:11:46.596 10:16:59 -- nvmf/common.sh@157 -- # true 00:11:46.596 10:16:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:46.596 Cannot find device "nvmf_tgt_br2" 00:11:46.596 10:16:59 -- nvmf/common.sh@158 -- # true 00:11:46.596 10:16:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:46.596 10:16:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:46.596 10:16:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:46.596 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:46.596 10:16:59 -- nvmf/common.sh@161 -- # true 00:11:46.596 10:16:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:46.596 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:46.596 10:16:59 -- nvmf/common.sh@162 -- # true 00:11:46.596 10:16:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:46.596 10:16:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:46.596 10:16:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:46.596 10:16:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:46.596 10:16:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:46.596 10:16:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:46.596 10:16:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:46.596 10:16:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:46.596 10:16:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:46.596 10:16:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:46.596 10:16:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:46.596 10:16:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:46.596 10:16:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:46.596 10:16:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:46.596 10:16:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:46.596 10:16:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:46.596 10:17:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:46.596 10:17:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:46.596 10:17:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:46.596 10:17:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:46.596 10:17:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:46.596 10:17:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:46.855 10:17:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:11:46.855 10:17:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:46.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:46.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:11:46.855 00:11:46.855 --- 10.0.0.2 ping statistics --- 00:11:46.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.855 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:11:46.855 10:17:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:46.855 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:46.855 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:11:46.855 00:11:46.855 --- 10.0.0.3 ping statistics --- 00:11:46.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.855 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:46.855 10:17:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:46.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:46.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:11:46.855 00:11:46.855 --- 10.0.0.1 ping statistics --- 00:11:46.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.855 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:46.855 10:17:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:46.855 10:17:00 -- nvmf/common.sh@421 -- # return 0 00:11:46.855 10:17:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:46.855 10:17:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:46.855 10:17:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:46.855 10:17:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:46.855 10:17:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:46.855 10:17:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:46.855 10:17:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:46.855 10:17:00 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:11:46.855 10:17:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:46.855 10:17:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:46.855 10:17:00 -- common/autotest_common.sh@10 -- # set +x 00:11:46.855 10:17:00 -- nvmf/common.sh@469 -- # nvmfpid=76190 00:11:46.855 10:17:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:11:46.855 10:17:00 -- nvmf/common.sh@470 -- # waitforlisten 76190 00:11:46.855 10:17:00 -- common/autotest_common.sh@819 -- # '[' -z 76190 ']' 00:11:46.855 10:17:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.855 10:17:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:46.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.855 10:17:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.855 10:17:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:46.855 10:17:00 -- common/autotest_common.sh@10 -- # set +x 00:11:46.855 [2024-07-26 10:17:00.141680] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
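The nvmf_veth_init sequence in the trace above builds the whole test network from scratch: one namespace for the target, veth pairs whose host-side ends hang off a bridge, and iptables rules that let NVMe/TCP traffic through. Condensed from the commands in this log (the second target interface, nvmf_tgt_if2 with 10.0.0.3, is configured the same way and omitted here):

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: *_if is the endpoint, *_br is the end that joins the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    # the target endpoint lives inside the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # 10.0.0.1 = initiator (root namespace), 10.0.0.2 = target (inside the namespace)
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side ends together and open up port 4420
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # connectivity check, matching the ping output above
    ping -c 1 10.0.0.2

The target itself is then started inside the namespace, as the nvmfappstart trace shows: ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc.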
00:11:46.855 [2024-07-26 10:17:00.142378] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.855 [2024-07-26 10:17:00.281802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.114 [2024-07-26 10:17:00.376684] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:47.114 [2024-07-26 10:17:00.376853] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.114 [2024-07-26 10:17:00.376869] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.114 [2024-07-26 10:17:00.376880] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:47.114 [2024-07-26 10:17:00.376916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.049 10:17:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:48.049 10:17:01 -- common/autotest_common.sh@852 -- # return 0 00:11:48.049 10:17:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:48.049 10:17:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:48.049 10:17:01 -- common/autotest_common.sh@10 -- # set +x 00:11:48.050 10:17:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.050 10:17:01 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:11:48.050 10:17:01 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:11:48.050 true 00:11:48.050 10:17:01 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:48.050 10:17:01 -- target/tls.sh@82 -- # jq -r .tls_version 00:11:48.308 10:17:01 -- target/tls.sh@82 -- # version=0 00:11:48.308 10:17:01 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:11:48.308 10:17:01 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:48.567 10:17:01 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:48.567 10:17:01 -- target/tls.sh@90 -- # jq -r .tls_version 00:11:48.825 10:17:02 -- target/tls.sh@90 -- # version=13 00:11:48.825 10:17:02 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:11:48.825 10:17:02 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:11:49.084 10:17:02 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:49.084 10:17:02 -- target/tls.sh@98 -- # jq -r .tls_version 00:11:49.354 10:17:02 -- target/tls.sh@98 -- # version=7 00:11:49.354 10:17:02 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:11:49.354 10:17:02 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:49.354 10:17:02 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:11:49.615 10:17:02 -- target/tls.sh@105 -- # ktls=false 00:11:49.615 10:17:02 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:11:49.615 10:17:02 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:11:49.616 10:17:03 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:49.616 10:17:03 -- target/tls.sh@113 -- # jq -r 
.enable_ktls 00:11:49.875 10:17:03 -- target/tls.sh@113 -- # ktls=true 00:11:49.875 10:17:03 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:11:49.875 10:17:03 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:11:50.133 10:17:03 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:50.134 10:17:03 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:11:50.392 10:17:03 -- target/tls.sh@121 -- # ktls=false 00:11:50.392 10:17:03 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:11:50.392 10:17:03 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:11:50.392 10:17:03 -- target/tls.sh@49 -- # local key hash crc 00:11:50.392 10:17:03 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:11:50.393 10:17:03 -- target/tls.sh@51 -- # hash=01 00:11:50.393 10:17:03 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:11:50.393 10:17:03 -- target/tls.sh@52 -- # gzip -1 -c 00:11:50.393 10:17:03 -- target/tls.sh@52 -- # tail -c8 00:11:50.393 10:17:03 -- target/tls.sh@52 -- # head -c 4 00:11:50.393 10:17:03 -- target/tls.sh@52 -- # crc='p$H�' 00:11:50.393 10:17:03 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:11:50.393 10:17:03 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:50.393 10:17:03 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:50.393 10:17:03 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:50.393 10:17:03 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:11:50.393 10:17:03 -- target/tls.sh@49 -- # local key hash crc 00:11:50.393 10:17:03 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:11:50.393 10:17:03 -- target/tls.sh@51 -- # hash=01 00:11:50.393 10:17:03 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:11:50.393 10:17:03 -- target/tls.sh@52 -- # gzip -1 -c 00:11:50.393 10:17:03 -- target/tls.sh@52 -- # tail -c8 00:11:50.393 10:17:03 -- target/tls.sh@52 -- # head -c 4 00:11:50.393 10:17:03 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:11:50.393 10:17:03 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:11:50.393 10:17:03 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:50.393 10:17:03 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:50.393 10:17:03 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:50.393 10:17:03 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:50.393 10:17:03 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:50.393 10:17:03 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:50.393 10:17:03 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:50.393 10:17:03 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:50.393 10:17:03 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:50.393 10:17:03 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:50.651 10:17:04 -- target/tls.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:11:50.909 10:17:04 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:50.909 10:17:04 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:50.910 10:17:04 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:51.168 [2024-07-26 10:17:04.553864] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.168 10:17:04 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:51.426 10:17:04 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:51.684 [2024-07-26 10:17:04.993950] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:51.684 [2024-07-26 10:17:04.994213] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.684 10:17:05 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:51.943 malloc0 00:11:51.943 10:17:05 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:52.201 10:17:05 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:52.459 10:17:05 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:04.667 Initializing NVMe Controllers 00:12:04.667 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:04.667 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:04.667 Initialization complete. Launching workers. 
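format_interchange_psk, traced a little earlier, is worth spelling out: it turns a raw hex key into the NVMeTLSkey-1:01: interchange form by appending a CRC32 of the key and base64-encoding the pair, and it pulls the CRC32 out of gzip's trailer rather than using a dedicated tool. A stand-alone sketch of the same derivation for the first key used here:

    key=00112233445566778899aabbccddeeff
    # gzip -1 ends every stream with an 8-byte trailer: CRC32 of the input, then its
    # length; the first 4 of those last 8 bytes are therefore the CRC32 of the key text
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
    # interchange form: "NVMeTLSkey-1:<hash>:" + base64(key || crc) + ":"
    echo "NVMeTLSkey-1:01:$(echo -n "$key$crc" | base64):"
    # -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The resulting strings are written to key1.txt/key2.txt with mode 0600, the TLS-capable listener is added with the extra -k flag, and key1.txt is registered for host1 via nvmf_subsystem_add_host --psk, which is the key the successful spdk_nvme_perf run above presented.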
00:12:04.667 ======================================================== 00:12:04.667 Latency(us) 00:12:04.667 Device Information : IOPS MiB/s Average min max 00:12:04.667 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10234.47 39.98 6254.82 1137.25 9771.16 00:12:04.667 ======================================================== 00:12:04.667 Total : 10234.47 39.98 6254.82 1137.25 9771.16 00:12:04.667 00:12:04.667 10:17:15 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:04.667 10:17:15 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:04.667 10:17:15 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:04.667 10:17:15 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:04.667 10:17:15 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:12:04.667 10:17:15 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:04.667 10:17:15 -- target/tls.sh@28 -- # bdevperf_pid=76428 00:12:04.667 10:17:15 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:04.667 10:17:15 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:04.667 10:17:15 -- target/tls.sh@31 -- # waitforlisten 76428 /var/tmp/bdevperf.sock 00:12:04.667 10:17:15 -- common/autotest_common.sh@819 -- # '[' -z 76428 ']' 00:12:04.667 10:17:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:04.667 10:17:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:04.667 10:17:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:04.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:04.667 10:17:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:04.667 10:17:15 -- common/autotest_common.sh@10 -- # set +x 00:12:04.667 [2024-07-26 10:17:15.997812] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:12:04.667 [2024-07-26 10:17:15.997932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76428 ] 00:12:04.667 [2024-07-26 10:17:16.138156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.667 [2024-07-26 10:17:16.240177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.667 10:17:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:04.667 10:17:16 -- common/autotest_common.sh@852 -- # return 0 00:12:04.667 10:17:16 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:04.667 [2024-07-26 10:17:17.123724] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:04.667 TLSTESTn1 00:12:04.667 10:17:17 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:04.667 Running I/O for 10 seconds... 
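For the bdevperf-based runs the initiator is again fully userspace: bdevperf is started idle and everything else is driven over its private RPC socket. A condensed sketch of what run_bdevperf does, with paths and arguments taken from this trace:

    # start bdevperf waiting for RPC configuration (-z) on its own socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # attach a TLS-protected controller; the PSK file has to be the one the target
    # registered for this host NQN (key1.txt here)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
    # run the configured verify workload; the latency table below is its output
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests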
00:12:14.651 00:12:14.651 Latency(us) 00:12:14.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.651 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:14.651 Verification LBA range: start 0x0 length 0x2000 00:12:14.651 TLSTESTn1 : 10.02 5876.30 22.95 0.00 0.00 21745.45 5034.36 21448.15 00:12:14.651 =================================================================================================================== 00:12:14.651 Total : 5876.30 22.95 0.00 0.00 21745.45 5034.36 21448.15 00:12:14.651 0 00:12:14.651 10:17:27 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:14.651 10:17:27 -- target/tls.sh@45 -- # killprocess 76428 00:12:14.651 10:17:27 -- common/autotest_common.sh@926 -- # '[' -z 76428 ']' 00:12:14.651 10:17:27 -- common/autotest_common.sh@930 -- # kill -0 76428 00:12:14.651 10:17:27 -- common/autotest_common.sh@931 -- # uname 00:12:14.651 10:17:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:14.651 10:17:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76428 00:12:14.651 killing process with pid 76428 00:12:14.651 Received shutdown signal, test time was about 10.000000 seconds 00:12:14.651 00:12:14.651 Latency(us) 00:12:14.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.651 =================================================================================================================== 00:12:14.651 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:14.651 10:17:27 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:14.651 10:17:27 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:14.651 10:17:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76428' 00:12:14.651 10:17:27 -- common/autotest_common.sh@945 -- # kill 76428 00:12:14.651 10:17:27 -- common/autotest_common.sh@950 -- # wait 76428 00:12:14.652 10:17:27 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:14.652 10:17:27 -- common/autotest_common.sh@640 -- # local es=0 00:12:14.652 10:17:27 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:14.652 10:17:27 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:12:14.652 10:17:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:14.652 10:17:27 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:12:14.652 10:17:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:14.652 10:17:27 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:14.652 10:17:27 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:14.652 10:17:27 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:14.652 10:17:27 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:14.652 10:17:27 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:12:14.652 10:17:27 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:14.652 10:17:27 -- target/tls.sh@28 -- # bdevperf_pid=76568 00:12:14.652 10:17:27 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock 
-q 128 -o 4096 -w verify -t 10 00:12:14.652 10:17:27 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:14.652 10:17:27 -- target/tls.sh@31 -- # waitforlisten 76568 /var/tmp/bdevperf.sock 00:12:14.652 10:17:27 -- common/autotest_common.sh@819 -- # '[' -z 76568 ']' 00:12:14.652 10:17:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:14.652 10:17:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:14.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:14.652 10:17:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:14.652 10:17:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:14.652 10:17:27 -- common/autotest_common.sh@10 -- # set +x 00:12:14.652 [2024-07-26 10:17:27.669391] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:12:14.652 [2024-07-26 10:17:27.669462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76568 ] 00:12:14.652 [2024-07-26 10:17:27.800426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.652 [2024-07-26 10:17:27.892305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.218 10:17:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:15.218 10:17:28 -- common/autotest_common.sh@852 -- # return 0 00:12:15.218 10:17:28 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:15.476 [2024-07-26 10:17:28.885321] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:15.476 [2024-07-26 10:17:28.890555] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:15.476 [2024-07-26 10:17:28.891165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e75190 (107): Transport endpoint is not connected 00:12:15.476 [2024-07-26 10:17:28.892150] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e75190 (9): Bad file descriptor 00:12:15.476 [2024-07-26 10:17:28.893145] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:15.476 [2024-07-26 10:17:28.893169] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:15.476 [2024-07-26 10:17:28.893196] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
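This particular pass is a deliberate failure case (tls.sh wraps it in NOT): the controller is attached with key2.txt even though the target only has key1.txt registered for host1, so the TLS session never comes up, the client-side reads fail with errno 107, and bdev_nvme_attach_controller returns the JSON-RPC error dumped next. The failing call, for reference:

    # expected to fail: this PSK does not match what the target holds for host1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt

The later variants repeat the same check with a host NQN (host2) and a subsystem (cnode2) the target has no PSK for, and finally with no PSK at all.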
00:12:15.476 request: 00:12:15.476 { 00:12:15.476 "name": "TLSTEST", 00:12:15.476 "trtype": "tcp", 00:12:15.476 "traddr": "10.0.0.2", 00:12:15.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:15.476 "adrfam": "ipv4", 00:12:15.476 "trsvcid": "4420", 00:12:15.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:15.476 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:12:15.476 "method": "bdev_nvme_attach_controller", 00:12:15.476 "req_id": 1 00:12:15.476 } 00:12:15.476 Got JSON-RPC error response 00:12:15.476 response: 00:12:15.476 { 00:12:15.476 "code": -32602, 00:12:15.476 "message": "Invalid parameters" 00:12:15.476 } 00:12:15.476 10:17:28 -- target/tls.sh@36 -- # killprocess 76568 00:12:15.476 10:17:28 -- common/autotest_common.sh@926 -- # '[' -z 76568 ']' 00:12:15.477 10:17:28 -- common/autotest_common.sh@930 -- # kill -0 76568 00:12:15.477 10:17:28 -- common/autotest_common.sh@931 -- # uname 00:12:15.477 10:17:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:15.477 10:17:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76568 00:12:15.734 10:17:28 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:15.734 10:17:28 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:15.734 killing process with pid 76568 00:12:15.734 10:17:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76568' 00:12:15.734 10:17:28 -- common/autotest_common.sh@945 -- # kill 76568 00:12:15.734 Received shutdown signal, test time was about 10.000000 seconds 00:12:15.734 00:12:15.734 Latency(us) 00:12:15.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.734 =================================================================================================================== 00:12:15.735 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:15.735 10:17:28 -- common/autotest_common.sh@950 -- # wait 76568 00:12:15.735 10:17:29 -- target/tls.sh@37 -- # return 1 00:12:15.735 10:17:29 -- common/autotest_common.sh@643 -- # es=1 00:12:15.735 10:17:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:15.735 10:17:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:15.735 10:17:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:15.735 10:17:29 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:15.735 10:17:29 -- common/autotest_common.sh@640 -- # local es=0 00:12:15.735 10:17:29 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:15.735 10:17:29 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:12:15.735 10:17:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:15.735 10:17:29 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:12:15.735 10:17:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:15.735 10:17:29 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:15.735 10:17:29 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:15.735 10:17:29 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:15.735 10:17:29 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:12:15.735 10:17:29 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:12:15.735 10:17:29 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:15.735 10:17:29 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:15.735 10:17:29 -- target/tls.sh@28 -- # bdevperf_pid=76590 00:12:15.735 10:17:29 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:15.735 10:17:29 -- target/tls.sh@31 -- # waitforlisten 76590 /var/tmp/bdevperf.sock 00:12:15.735 10:17:29 -- common/autotest_common.sh@819 -- # '[' -z 76590 ']' 00:12:15.735 10:17:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:15.735 10:17:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:15.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:15.735 10:17:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:15.735 10:17:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:15.735 10:17:29 -- common/autotest_common.sh@10 -- # set +x 00:12:15.735 [2024-07-26 10:17:29.177952] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:12:15.735 [2024-07-26 10:17:29.178061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76590 ] 00:12:15.993 [2024-07-26 10:17:29.310648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.993 [2024-07-26 10:17:29.406644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.928 10:17:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:16.928 10:17:30 -- common/autotest_common.sh@852 -- # return 0 00:12:16.928 10:17:30 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:16.928 [2024-07-26 10:17:30.329551] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:16.928 [2024-07-26 10:17:30.338822] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:16.928 [2024-07-26 10:17:30.338859] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:16.928 [2024-07-26 10:17:30.338919] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:16.928 [2024-07-26 10:17:30.339744] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd190 (107): Transport endpoint is not connected 00:12:16.928 [2024-07-26 10:17:30.340730] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd190 (9): Bad file descriptor 00:12:16.928 [2024-07-26 10:17:30.341726] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:16.928 [2024-07-26 10:17:30.341752] nvme.c: 
708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:16.928 [2024-07-26 10:17:30.341763] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:12:16.928 request: 00:12:16.928 { 00:12:16.928 "name": "TLSTEST", 00:12:16.928 "trtype": "tcp", 00:12:16.928 "traddr": "10.0.0.2", 00:12:16.928 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:12:16.928 "adrfam": "ipv4", 00:12:16.928 "trsvcid": "4420", 00:12:16.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:16.928 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:12:16.928 "method": "bdev_nvme_attach_controller", 00:12:16.928 "req_id": 1 00:12:16.928 } 00:12:16.928 Got JSON-RPC error response 00:12:16.928 response: 00:12:16.928 { 00:12:16.928 "code": -32602, 00:12:16.928 "message": "Invalid parameters" 00:12:16.928 } 00:12:16.928 10:17:30 -- target/tls.sh@36 -- # killprocess 76590 00:12:16.928 10:17:30 -- common/autotest_common.sh@926 -- # '[' -z 76590 ']' 00:12:16.928 10:17:30 -- common/autotest_common.sh@930 -- # kill -0 76590 00:12:16.928 10:17:30 -- common/autotest_common.sh@931 -- # uname 00:12:16.928 10:17:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:16.928 10:17:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76590 00:12:17.194 killing process with pid 76590 00:12:17.194 Received shutdown signal, test time was about 10.000000 seconds 00:12:17.194 00:12:17.194 Latency(us) 00:12:17.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:17.194 =================================================================================================================== 00:12:17.194 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:17.194 10:17:30 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:17.194 10:17:30 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:17.194 10:17:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76590' 00:12:17.194 10:17:30 -- common/autotest_common.sh@945 -- # kill 76590 00:12:17.194 10:17:30 -- common/autotest_common.sh@950 -- # wait 76590 00:12:17.194 10:17:30 -- target/tls.sh@37 -- # return 1 00:12:17.194 10:17:30 -- common/autotest_common.sh@643 -- # es=1 00:12:17.194 10:17:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:17.194 10:17:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:17.194 10:17:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:17.194 10:17:30 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:17.194 10:17:30 -- common/autotest_common.sh@640 -- # local es=0 00:12:17.194 10:17:30 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:17.194 10:17:30 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:12:17.194 10:17:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:17.194 10:17:30 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:12:17.194 10:17:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:17.194 10:17:30 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:17.194 10:17:30 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:17.194 
10:17:30 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:12:17.194 10:17:30 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:17.194 10:17:30 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:12:17.194 10:17:30 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:17.194 10:17:30 -- target/tls.sh@28 -- # bdevperf_pid=76622 00:12:17.194 10:17:30 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:17.194 10:17:30 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:17.194 10:17:30 -- target/tls.sh@31 -- # waitforlisten 76622 /var/tmp/bdevperf.sock 00:12:17.194 10:17:30 -- common/autotest_common.sh@819 -- # '[' -z 76622 ']' 00:12:17.194 10:17:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:17.194 10:17:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:17.194 10:17:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:17.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:17.194 10:17:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:17.194 10:17:30 -- common/autotest_common.sh@10 -- # set +x 00:12:17.534 [2024-07-26 10:17:30.655227] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:12:17.535 [2024-07-26 10:17:30.655353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76622 ] 00:12:17.535 [2024-07-26 10:17:30.791970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.535 [2024-07-26 10:17:30.869474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.471 10:17:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:18.471 10:17:31 -- common/autotest_common.sh@852 -- # return 0 00:12:18.471 10:17:31 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:18.471 [2024-07-26 10:17:31.857910] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:18.471 [2024-07-26 10:17:31.869149] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:18.471 [2024-07-26 10:17:31.869206] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:18.471 [2024-07-26 10:17:31.869274] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:18.471 [2024-07-26 10:17:31.870051] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8e190 (107): Transport endpoint is not connected 00:12:18.471 [2024-07-26 10:17:31.871024] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8e190 (9): Bad file 
descriptor 00:12:18.471 [2024-07-26 10:17:31.872021] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:12:18.471 [2024-07-26 10:17:31.872056] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:18.471 [2024-07-26 10:17:31.872068] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:12:18.471 request: 00:12:18.471 { 00:12:18.471 "name": "TLSTEST", 00:12:18.471 "trtype": "tcp", 00:12:18.471 "traddr": "10.0.0.2", 00:12:18.471 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:18.471 "adrfam": "ipv4", 00:12:18.471 "trsvcid": "4420", 00:12:18.471 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:12:18.471 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:12:18.471 "method": "bdev_nvme_attach_controller", 00:12:18.471 "req_id": 1 00:12:18.471 } 00:12:18.471 Got JSON-RPC error response 00:12:18.471 response: 00:12:18.471 { 00:12:18.471 "code": -32602, 00:12:18.471 "message": "Invalid parameters" 00:12:18.471 } 00:12:18.471 10:17:31 -- target/tls.sh@36 -- # killprocess 76622 00:12:18.471 10:17:31 -- common/autotest_common.sh@926 -- # '[' -z 76622 ']' 00:12:18.471 10:17:31 -- common/autotest_common.sh@930 -- # kill -0 76622 00:12:18.471 10:17:31 -- common/autotest_common.sh@931 -- # uname 00:12:18.471 10:17:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:18.471 10:17:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76622 00:12:18.471 killing process with pid 76622 00:12:18.471 Received shutdown signal, test time was about 10.000000 seconds 00:12:18.471 00:12:18.471 Latency(us) 00:12:18.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.471 =================================================================================================================== 00:12:18.471 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:18.471 10:17:31 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:18.471 10:17:31 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:18.471 10:17:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76622' 00:12:18.471 10:17:31 -- common/autotest_common.sh@945 -- # kill 76622 00:12:18.471 10:17:31 -- common/autotest_common.sh@950 -- # wait 76622 00:12:18.730 10:17:32 -- target/tls.sh@37 -- # return 1 00:12:18.730 10:17:32 -- common/autotest_common.sh@643 -- # es=1 00:12:18.730 10:17:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:18.730 10:17:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:18.730 10:17:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:18.730 10:17:32 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:18.730 10:17:32 -- common/autotest_common.sh@640 -- # local es=0 00:12:18.730 10:17:32 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:18.730 10:17:32 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:12:18.730 10:17:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:18.730 10:17:32 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:12:18.730 10:17:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:18.730 10:17:32 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:18.730 10:17:32 -- 
target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:18.730 10:17:32 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:18.730 10:17:32 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:18.730 10:17:32 -- target/tls.sh@23 -- # psk= 00:12:18.730 10:17:32 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:18.730 10:17:32 -- target/tls.sh@28 -- # bdevperf_pid=76645 00:12:18.730 10:17:32 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:18.730 10:17:32 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:18.730 10:17:32 -- target/tls.sh@31 -- # waitforlisten 76645 /var/tmp/bdevperf.sock 00:12:18.730 10:17:32 -- common/autotest_common.sh@819 -- # '[' -z 76645 ']' 00:12:18.730 10:17:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:18.730 10:17:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:18.730 10:17:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:18.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:18.730 10:17:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:18.730 10:17:32 -- common/autotest_common.sh@10 -- # set +x 00:12:18.730 [2024-07-26 10:17:32.174241] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:12:18.730 [2024-07-26 10:17:32.174546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76645 ] 00:12:18.988 [2024-07-26 10:17:32.309932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.988 [2024-07-26 10:17:32.401311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.922 10:17:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:19.922 10:17:33 -- common/autotest_common.sh@852 -- # return 0 00:12:19.922 10:17:33 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:12:19.922 [2024-07-26 10:17:33.359272] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:19.922 [2024-07-26 10:17:33.361327] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a4a20 (9): Bad file descriptor 00:12:19.922 [2024-07-26 10:17:33.362323] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:19.922 [2024-07-26 10:17:33.362363] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:19.922 [2024-07-26 10:17:33.362374] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:12:19.922 request: 00:12:19.922 { 00:12:19.922 "name": "TLSTEST", 00:12:19.922 "trtype": "tcp", 00:12:19.922 "traddr": "10.0.0.2", 00:12:19.922 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:19.922 "adrfam": "ipv4", 00:12:19.922 "trsvcid": "4420", 00:12:19.922 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:19.922 "method": "bdev_nvme_attach_controller", 00:12:19.922 "req_id": 1 00:12:19.922 } 00:12:19.922 Got JSON-RPC error response 00:12:19.922 response: 00:12:19.922 { 00:12:19.922 "code": -32602, 00:12:19.922 "message": "Invalid parameters" 00:12:19.922 } 00:12:20.181 10:17:33 -- target/tls.sh@36 -- # killprocess 76645 00:12:20.181 10:17:33 -- common/autotest_common.sh@926 -- # '[' -z 76645 ']' 00:12:20.181 10:17:33 -- common/autotest_common.sh@930 -- # kill -0 76645 00:12:20.181 10:17:33 -- common/autotest_common.sh@931 -- # uname 00:12:20.181 10:17:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:20.181 10:17:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76645 00:12:20.181 killing process with pid 76645 00:12:20.181 Received shutdown signal, test time was about 10.000000 seconds 00:12:20.181 00:12:20.181 Latency(us) 00:12:20.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.181 =================================================================================================================== 00:12:20.181 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:20.181 10:17:33 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:20.181 10:17:33 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:20.181 10:17:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76645' 00:12:20.181 10:17:33 -- common/autotest_common.sh@945 -- # kill 76645 00:12:20.181 10:17:33 -- common/autotest_common.sh@950 -- # wait 76645 00:12:20.181 10:17:33 -- target/tls.sh@37 -- # return 1 00:12:20.181 10:17:33 -- common/autotest_common.sh@643 -- # es=1 00:12:20.181 10:17:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:20.181 10:17:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:20.181 10:17:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:20.181 10:17:33 -- target/tls.sh@167 -- # killprocess 76190 00:12:20.181 10:17:33 -- common/autotest_common.sh@926 -- # '[' -z 76190 ']' 00:12:20.181 10:17:33 -- common/autotest_common.sh@930 -- # kill -0 76190 00:12:20.181 10:17:33 -- common/autotest_common.sh@931 -- # uname 00:12:20.181 10:17:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:20.181 10:17:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76190 00:12:20.181 killing process with pid 76190 00:12:20.181 10:17:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:20.181 10:17:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:20.181 10:17:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76190' 00:12:20.181 10:17:33 -- common/autotest_common.sh@945 -- # kill 76190 00:12:20.181 10:17:33 -- common/autotest_common.sh@950 -- # wait 76190 00:12:20.440 10:17:33 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:12:20.440 10:17:33 -- target/tls.sh@49 -- # local key hash crc 00:12:20.440 10:17:33 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:12:20.440 10:17:33 -- target/tls.sh@51 -- # hash=02 00:12:20.440 10:17:33 -- target/tls.sh@52 -- # echo -n 
00112233445566778899aabbccddeeff0011223344556677 00:12:20.440 10:17:33 -- target/tls.sh@52 -- # gzip -1 -c 00:12:20.440 10:17:33 -- target/tls.sh@52 -- # tail -c8 00:12:20.440 10:17:33 -- target/tls.sh@52 -- # head -c 4 00:12:20.440 10:17:33 -- target/tls.sh@52 -- # crc='�e�'\''' 00:12:20.440 10:17:33 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:12:20.440 10:17:33 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:12:20.440 10:17:33 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:20.440 10:17:33 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:20.440 10:17:33 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:20.440 10:17:33 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:20.440 10:17:33 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:20.440 10:17:33 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:12:20.440 10:17:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:20.440 10:17:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:20.440 10:17:33 -- common/autotest_common.sh@10 -- # set +x 00:12:20.440 10:17:33 -- nvmf/common.sh@469 -- # nvmfpid=76692 00:12:20.440 10:17:33 -- nvmf/common.sh@470 -- # waitforlisten 76692 00:12:20.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.440 10:17:33 -- common/autotest_common.sh@819 -- # '[' -z 76692 ']' 00:12:20.440 10:17:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.440 10:17:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:20.440 10:17:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:20.440 10:17:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.440 10:17:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:20.440 10:17:33 -- common/autotest_common.sh@10 -- # set +x 00:12:20.699 [2024-07-26 10:17:33.913277] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:12:20.699 [2024-07-26 10:17:33.913371] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.699 [2024-07-26 10:17:34.049714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.699 [2024-07-26 10:17:34.127038] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:20.699 [2024-07-26 10:17:34.127174] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.699 [2024-07-26 10:17:34.127187] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.699 [2024-07-26 10:17:34.127195] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
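The format_interchange_psk trace above builds the NVMeTLSkey-1:02:... string by appending the CRC-32 taken from a gzip trailer to the raw key and base64-encoding the result. A rough shell reconstruction of that pipeline, using the same sample key as this run (the /dev/fd/62 redirection in tls.sh is simplified to a plain pipe here), is:

    key=00112233445566778899aabbccddeeff0011223344556677
    # gzip's 8-byte trailer is CRC-32 followed by the input size, both little-endian; keep the first 4 bytes
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
    # the 02 hash identifier corresponds to SHA-384 in the PSK interchange format
    echo "NVMeTLSkey-1:02:$(echo -n "$key$crc" | base64):"
    # -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

This matches the key_long value written to key_long.txt and chmod'ed to 0600 in the trace above.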
00:12:20.699 [2024-07-26 10:17:34.127256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.634 10:17:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:21.634 10:17:34 -- common/autotest_common.sh@852 -- # return 0 00:12:21.634 10:17:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:21.634 10:17:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:21.634 10:17:34 -- common/autotest_common.sh@10 -- # set +x 00:12:21.634 10:17:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.634 10:17:34 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:21.634 10:17:34 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:21.634 10:17:34 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:21.893 [2024-07-26 10:17:35.178490] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.893 10:17:35 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:22.152 10:17:35 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:22.152 [2024-07-26 10:17:35.602593] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:22.152 [2024-07-26 10:17:35.602874] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.411 10:17:35 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:22.671 malloc0 00:12:22.671 10:17:35 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:22.671 10:17:36 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:22.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
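The setup_nvmf_tgt sequence traced here is the target-side half of the TLS test: a TCP transport, a subsystem with one malloc namespace, a listener opened with the -k (TLS) flag, and the host registered against the 0600-permission PSK file. Condensed out of the xtrace above into plain commands (full rpc.py and key paths shortened for readability):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk test/nvmf/target/key_long.txt

The -k flag is what triggers the "TLS support is considered experimental" notices and opens the TLS-capable listener on 10.0.0.2:4420.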
00:12:22.930 10:17:36 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:22.930 10:17:36 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:22.930 10:17:36 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:22.930 10:17:36 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:22.930 10:17:36 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:12:22.930 10:17:36 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:22.930 10:17:36 -- target/tls.sh@28 -- # bdevperf_pid=76742 00:12:22.930 10:17:36 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:22.930 10:17:36 -- target/tls.sh@31 -- # waitforlisten 76742 /var/tmp/bdevperf.sock 00:12:22.930 10:17:36 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:22.930 10:17:36 -- common/autotest_common.sh@819 -- # '[' -z 76742 ']' 00:12:22.930 10:17:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:22.930 10:17:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:22.930 10:17:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:22.930 10:17:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:22.930 10:17:36 -- common/autotest_common.sh@10 -- # set +x 00:12:23.195 [2024-07-26 10:17:36.417145] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:12:23.195 [2024-07-26 10:17:36.417490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76742 ] 00:12:23.195 [2024-07-26 10:17:36.556841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.195 [2024-07-26 10:17:36.637456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.145 10:17:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:24.145 10:17:37 -- common/autotest_common.sh@852 -- # return 0 00:12:24.145 10:17:37 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:24.145 [2024-07-26 10:17:37.573359] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:24.404 TLSTESTn1 00:12:24.404 10:17:37 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:24.404 Running I/O for 10 seconds... 
00:12:34.417 00:12:34.417 Latency(us) 00:12:34.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.417 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:34.417 Verification LBA range: start 0x0 length 0x2000 00:12:34.417 TLSTESTn1 : 10.01 5914.98 23.11 0.00 0.00 21605.85 5510.98 31218.97 00:12:34.417 =================================================================================================================== 00:12:34.417 Total : 5914.98 23.11 0.00 0.00 21605.85 5510.98 31218.97 00:12:34.417 0 00:12:34.417 10:17:47 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:34.417 10:17:47 -- target/tls.sh@45 -- # killprocess 76742 00:12:34.417 10:17:47 -- common/autotest_common.sh@926 -- # '[' -z 76742 ']' 00:12:34.417 10:17:47 -- common/autotest_common.sh@930 -- # kill -0 76742 00:12:34.417 10:17:47 -- common/autotest_common.sh@931 -- # uname 00:12:34.417 10:17:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:34.417 10:17:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76742 00:12:34.417 10:17:47 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:34.417 10:17:47 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:34.417 10:17:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76742' 00:12:34.417 killing process with pid 76742 00:12:34.417 10:17:47 -- common/autotest_common.sh@945 -- # kill 76742 00:12:34.417 Received shutdown signal, test time was about 10.000000 seconds 00:12:34.417 00:12:34.417 Latency(us) 00:12:34.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.417 =================================================================================================================== 00:12:34.417 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:34.417 10:17:47 -- common/autotest_common.sh@950 -- # wait 76742 00:12:34.676 10:17:48 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:34.676 10:17:48 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:34.676 10:17:48 -- common/autotest_common.sh@640 -- # local es=0 00:12:34.676 10:17:48 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:34.676 10:17:48 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:12:34.676 10:17:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:34.676 10:17:48 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:12:34.676 10:17:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:34.676 10:17:48 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:34.676 10:17:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:34.676 10:17:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:34.676 10:17:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:34.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
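With the target configured, the positive case at tls.sh@176 attaches the initiator through the bdevperf RPC socket and then starts the timed workload; the 10-second verify run above (~5.9k IOPS at 4 KiB, queue depth 128) came from this pair of calls, reconstructed from the trace with full paths trimmed:

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk test/nvmf/target/key_long.txt
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests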
00:12:34.676 10:17:48 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:12:34.676 10:17:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:34.676 10:17:48 -- target/tls.sh@28 -- # bdevperf_pid=76877 00:12:34.676 10:17:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:34.676 10:17:48 -- target/tls.sh@31 -- # waitforlisten 76877 /var/tmp/bdevperf.sock 00:12:34.676 10:17:48 -- common/autotest_common.sh@819 -- # '[' -z 76877 ']' 00:12:34.676 10:17:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:34.676 10:17:48 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:34.676 10:17:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:34.676 10:17:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:34.676 10:17:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:34.676 10:17:48 -- common/autotest_common.sh@10 -- # set +x 00:12:34.676 [2024-07-26 10:17:48.101848] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:12:34.676 [2024-07-26 10:17:48.101934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76877 ] 00:12:34.934 [2024-07-26 10:17:48.240980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.934 [2024-07-26 10:17:48.323384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.870 10:17:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:35.871 10:17:49 -- common/autotest_common.sh@852 -- # return 0 00:12:35.871 10:17:49 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:35.871 [2024-07-26 10:17:49.282055] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:35.871 [2024-07-26 10:17:49.282653] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:12:35.871 request: 00:12:35.871 { 00:12:35.871 "name": "TLSTEST", 00:12:35.871 "trtype": "tcp", 00:12:35.871 "traddr": "10.0.0.2", 00:12:35.871 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:35.871 "adrfam": "ipv4", 00:12:35.871 "trsvcid": "4420", 00:12:35.871 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:35.871 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:35.871 "method": "bdev_nvme_attach_controller", 00:12:35.871 "req_id": 1 00:12:35.871 } 00:12:35.871 Got JSON-RPC error response 00:12:35.871 response: 00:12:35.871 { 00:12:35.871 "code": -22, 00:12:35.871 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:35.871 } 00:12:35.871 10:17:49 -- target/tls.sh@36 -- # killprocess 76877 00:12:35.871 10:17:49 -- common/autotest_common.sh@926 -- # '[' -z 76877 ']' 00:12:35.871 10:17:49 -- common/autotest_common.sh@930 -- # kill -0 76877 00:12:35.871 10:17:49 -- common/autotest_common.sh@931 -- # uname 00:12:35.871 10:17:49 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:35.871 10:17:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76877 00:12:35.871 killing process with pid 76877 00:12:35.871 Received shutdown signal, test time was about 10.000000 seconds 00:12:35.871 00:12:35.871 Latency(us) 00:12:35.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.871 =================================================================================================================== 00:12:35.871 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:35.871 10:17:49 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:35.871 10:17:49 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:35.871 10:17:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76877' 00:12:35.871 10:17:49 -- common/autotest_common.sh@945 -- # kill 76877 00:12:35.871 10:17:49 -- common/autotest_common.sh@950 -- # wait 76877 00:12:36.129 10:17:49 -- target/tls.sh@37 -- # return 1 00:12:36.129 10:17:49 -- common/autotest_common.sh@643 -- # es=1 00:12:36.129 10:17:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:36.129 10:17:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:36.129 10:17:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:36.129 10:17:49 -- target/tls.sh@183 -- # killprocess 76692 00:12:36.129 10:17:49 -- common/autotest_common.sh@926 -- # '[' -z 76692 ']' 00:12:36.129 10:17:49 -- common/autotest_common.sh@930 -- # kill -0 76692 00:12:36.129 10:17:49 -- common/autotest_common.sh@931 -- # uname 00:12:36.129 10:17:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:36.129 10:17:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76692 00:12:36.129 killing process with pid 76692 00:12:36.129 10:17:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:36.129 10:17:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:36.129 10:17:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76692' 00:12:36.129 10:17:49 -- common/autotest_common.sh@945 -- # kill 76692 00:12:36.129 10:17:49 -- common/autotest_common.sh@950 -- # wait 76692 00:12:36.388 10:17:49 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:12:36.388 10:17:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:36.388 10:17:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:36.388 10:17:49 -- common/autotest_common.sh@10 -- # set +x 00:12:36.388 10:17:49 -- nvmf/common.sh@469 -- # nvmfpid=76915 00:12:36.388 10:17:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:36.388 10:17:49 -- nvmf/common.sh@470 -- # waitforlisten 76915 00:12:36.388 10:17:49 -- common/autotest_common.sh@819 -- # '[' -z 76915 ']' 00:12:36.388 10:17:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.388 10:17:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:36.388 10:17:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
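The two NOT cases around the world-readable key show where the permission check bites on each side: with the file at 0666, the initiator-side attach at tls.sh@180 fails with "Could not retrieve PSK from file" (-22), and the target-side nvmf_subsystem_add_host attempted further below (tls.sh@186) is rejected with "Internal error" (-32603) by the target's own tcp_load_psk check. In short:

    chmod 0666 test/nvmf/target/key_long.txt    # too permissive: both sides refuse to load the key
    # ... bdev_nvme_attach_controller and nvmf_subsystem_add_host now fail, as traced here and below ...
    chmod 0600 test/nvmf/target/key_long.txt    # restored at tls.sh@190 before the next positive run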
00:12:36.388 10:17:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:36.388 10:17:49 -- common/autotest_common.sh@10 -- # set +x 00:12:36.388 [2024-07-26 10:17:49.832471] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:12:36.388 [2024-07-26 10:17:49.832566] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.646 [2024-07-26 10:17:49.973116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.646 [2024-07-26 10:17:50.047597] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:36.646 [2024-07-26 10:17:50.047768] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.646 [2024-07-26 10:17:50.047782] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.646 [2024-07-26 10:17:50.047791] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:36.646 [2024-07-26 10:17:50.047823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.582 10:17:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:37.582 10:17:50 -- common/autotest_common.sh@852 -- # return 0 00:12:37.582 10:17:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:37.582 10:17:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:37.582 10:17:50 -- common/autotest_common.sh@10 -- # set +x 00:12:37.582 10:17:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.582 10:17:50 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:37.582 10:17:50 -- common/autotest_common.sh@640 -- # local es=0 00:12:37.582 10:17:50 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:37.582 10:17:50 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:12:37.582 10:17:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:37.583 10:17:50 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:12:37.583 10:17:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:37.583 10:17:50 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:37.583 10:17:50 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:37.583 10:17:50 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:37.583 [2024-07-26 10:17:50.982189] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.583 10:17:50 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:37.841 10:17:51 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:38.100 [2024-07-26 10:17:51.454345] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:38.100 [2024-07-26 10:17:51.454628] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:12:38.100 10:17:51 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:38.358 malloc0 00:12:38.358 10:17:51 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:38.617 10:17:51 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:38.877 [2024-07-26 10:17:52.133508] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:12:38.877 [2024-07-26 10:17:52.133559] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:12:38.877 [2024-07-26 10:17:52.133644] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:12:38.877 request: 00:12:38.877 { 00:12:38.877 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:38.877 "host": "nqn.2016-06.io.spdk:host1", 00:12:38.877 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:38.877 "method": "nvmf_subsystem_add_host", 00:12:38.877 "req_id": 1 00:12:38.877 } 00:12:38.877 Got JSON-RPC error response 00:12:38.877 response: 00:12:38.877 { 00:12:38.877 "code": -32603, 00:12:38.877 "message": "Internal error" 00:12:38.877 } 00:12:38.877 10:17:52 -- common/autotest_common.sh@643 -- # es=1 00:12:38.877 10:17:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:38.877 10:17:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:38.877 10:17:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:38.877 10:17:52 -- target/tls.sh@189 -- # killprocess 76915 00:12:38.877 10:17:52 -- common/autotest_common.sh@926 -- # '[' -z 76915 ']' 00:12:38.877 10:17:52 -- common/autotest_common.sh@930 -- # kill -0 76915 00:12:38.877 10:17:52 -- common/autotest_common.sh@931 -- # uname 00:12:38.877 10:17:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:38.877 10:17:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76915 00:12:38.877 killing process with pid 76915 00:12:38.877 10:17:52 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:38.877 10:17:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:38.877 10:17:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76915' 00:12:38.877 10:17:52 -- common/autotest_common.sh@945 -- # kill 76915 00:12:38.877 10:17:52 -- common/autotest_common.sh@950 -- # wait 76915 00:12:39.136 10:17:52 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:39.136 10:17:52 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:12:39.136 10:17:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:39.136 10:17:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:39.136 10:17:52 -- common/autotest_common.sh@10 -- # set +x 00:12:39.136 10:17:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:39.136 10:17:52 -- nvmf/common.sh@469 -- # nvmfpid=76972 00:12:39.136 10:17:52 -- nvmf/common.sh@470 -- # waitforlisten 76972 00:12:39.136 10:17:52 -- common/autotest_common.sh@819 -- # '[' -z 76972 ']' 00:12:39.136 10:17:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.136 10:17:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:39.136 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.136 10:17:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.136 10:17:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:39.136 10:17:52 -- common/autotest_common.sh@10 -- # set +x 00:12:39.136 [2024-07-26 10:17:52.471610] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:12:39.136 [2024-07-26 10:17:52.471948] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.396 [2024-07-26 10:17:52.611694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.396 [2024-07-26 10:17:52.692519] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:39.396 [2024-07-26 10:17:52.692700] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.396 [2024-07-26 10:17:52.692730] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.396 [2024-07-26 10:17:52.692739] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:39.396 [2024-07-26 10:17:52.692771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.964 10:17:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:39.964 10:17:53 -- common/autotest_common.sh@852 -- # return 0 00:12:39.964 10:17:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:39.964 10:17:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:39.964 10:17:53 -- common/autotest_common.sh@10 -- # set +x 00:12:39.964 10:17:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.964 10:17:53 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:39.964 10:17:53 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:39.964 10:17:53 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:40.222 [2024-07-26 10:17:53.622561] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:40.222 10:17:53 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:40.480 10:17:53 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:40.739 [2024-07-26 10:17:54.042716] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:40.739 [2024-07-26 10:17:54.042964] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.739 10:17:54 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:40.998 malloc0 00:12:40.998 10:17:54 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:41.257 10:17:54 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:41.516 10:17:54 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:41.516 10:17:54 -- target/tls.sh@197 -- # bdevperf_pid=77021 00:12:41.516 10:17:54 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:41.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:41.516 10:17:54 -- target/tls.sh@200 -- # waitforlisten 77021 /var/tmp/bdevperf.sock 00:12:41.516 10:17:54 -- common/autotest_common.sh@819 -- # '[' -z 77021 ']' 00:12:41.516 10:17:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:41.516 10:17:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:41.516 10:17:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:41.516 10:17:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:41.516 10:17:54 -- common/autotest_common.sh@10 -- # set +x 00:12:41.516 [2024-07-26 10:17:54.790897] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:12:41.516 [2024-07-26 10:17:54.791158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77021 ] 00:12:41.516 [2024-07-26 10:17:54.928262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.775 [2024-07-26 10:17:55.015426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.342 10:17:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:42.342 10:17:55 -- common/autotest_common.sh@852 -- # return 0 00:12:42.342 10:17:55 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:42.601 [2024-07-26 10:17:55.897946] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:42.601 TLSTESTn1 00:12:42.601 10:17:55 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:42.860 10:17:56 -- target/tls.sh@205 -- # tgtconf='{ 00:12:42.860 "subsystems": [ 00:12:42.860 { 00:12:42.860 "subsystem": "iobuf", 00:12:42.860 "config": [ 00:12:42.860 { 00:12:42.860 "method": "iobuf_set_options", 00:12:42.860 "params": { 00:12:42.860 "small_pool_count": 8192, 00:12:42.860 "large_pool_count": 1024, 00:12:42.860 "small_bufsize": 8192, 00:12:42.860 "large_bufsize": 135168 00:12:42.860 } 00:12:42.860 } 00:12:42.860 ] 00:12:42.860 }, 00:12:42.860 { 00:12:42.860 "subsystem": "sock", 00:12:42.860 "config": [ 00:12:42.860 { 00:12:42.860 "method": "sock_impl_set_options", 00:12:42.860 "params": { 00:12:42.860 "impl_name": "uring", 00:12:42.860 "recv_buf_size": 2097152, 00:12:42.860 "send_buf_size": 2097152, 00:12:42.860 "enable_recv_pipe": true, 00:12:42.860 "enable_quickack": false, 00:12:42.860 "enable_placement_id": 0, 00:12:42.860 "enable_zerocopy_send_server": false, 00:12:42.860 "enable_zerocopy_send_client": false, 00:12:42.860 "zerocopy_threshold": 0, 00:12:42.860 "tls_version": 0, 00:12:42.860 "enable_ktls": false 00:12:42.860 
} 00:12:42.860 }, 00:12:42.860 { 00:12:42.860 "method": "sock_impl_set_options", 00:12:42.860 "params": { 00:12:42.860 "impl_name": "posix", 00:12:42.860 "recv_buf_size": 2097152, 00:12:42.860 "send_buf_size": 2097152, 00:12:42.860 "enable_recv_pipe": true, 00:12:42.860 "enable_quickack": false, 00:12:42.860 "enable_placement_id": 0, 00:12:42.860 "enable_zerocopy_send_server": true, 00:12:42.860 "enable_zerocopy_send_client": false, 00:12:42.860 "zerocopy_threshold": 0, 00:12:42.860 "tls_version": 0, 00:12:42.860 "enable_ktls": false 00:12:42.860 } 00:12:42.860 }, 00:12:42.860 { 00:12:42.860 "method": "sock_impl_set_options", 00:12:42.860 "params": { 00:12:42.860 "impl_name": "ssl", 00:12:42.860 "recv_buf_size": 4096, 00:12:42.860 "send_buf_size": 4096, 00:12:42.860 "enable_recv_pipe": true, 00:12:42.860 "enable_quickack": false, 00:12:42.860 "enable_placement_id": 0, 00:12:42.860 "enable_zerocopy_send_server": true, 00:12:42.860 "enable_zerocopy_send_client": false, 00:12:42.860 "zerocopy_threshold": 0, 00:12:42.860 "tls_version": 0, 00:12:42.860 "enable_ktls": false 00:12:42.860 } 00:12:42.860 } 00:12:42.860 ] 00:12:42.860 }, 00:12:42.860 { 00:12:42.860 "subsystem": "vmd", 00:12:42.860 "config": [] 00:12:42.860 }, 00:12:42.860 { 00:12:42.860 "subsystem": "accel", 00:12:42.860 "config": [ 00:12:42.860 { 00:12:42.860 "method": "accel_set_options", 00:12:42.860 "params": { 00:12:42.860 "small_cache_size": 128, 00:12:42.860 "large_cache_size": 16, 00:12:42.860 "task_count": 2048, 00:12:42.860 "sequence_count": 2048, 00:12:42.860 "buf_count": 2048 00:12:42.860 } 00:12:42.860 } 00:12:42.860 ] 00:12:42.860 }, 00:12:42.860 { 00:12:42.860 "subsystem": "bdev", 00:12:42.860 "config": [ 00:12:42.860 { 00:12:42.860 "method": "bdev_set_options", 00:12:42.860 "params": { 00:12:42.860 "bdev_io_pool_size": 65535, 00:12:42.860 "bdev_io_cache_size": 256, 00:12:42.860 "bdev_auto_examine": true, 00:12:42.860 "iobuf_small_cache_size": 128, 00:12:42.860 "iobuf_large_cache_size": 16 00:12:42.860 } 00:12:42.860 }, 00:12:42.860 { 00:12:42.860 "method": "bdev_raid_set_options", 00:12:42.860 "params": { 00:12:42.860 "process_window_size_kb": 1024 00:12:42.860 } 00:12:42.861 }, 00:12:42.861 { 00:12:42.861 "method": "bdev_iscsi_set_options", 00:12:42.861 "params": { 00:12:42.861 "timeout_sec": 30 00:12:42.861 } 00:12:42.861 }, 00:12:42.861 { 00:12:42.861 "method": "bdev_nvme_set_options", 00:12:42.861 "params": { 00:12:42.861 "action_on_timeout": "none", 00:12:42.861 "timeout_us": 0, 00:12:42.861 "timeout_admin_us": 0, 00:12:42.861 "keep_alive_timeout_ms": 10000, 00:12:42.861 "transport_retry_count": 4, 00:12:42.861 "arbitration_burst": 0, 00:12:42.861 "low_priority_weight": 0, 00:12:42.861 "medium_priority_weight": 0, 00:12:42.861 "high_priority_weight": 0, 00:12:42.861 "nvme_adminq_poll_period_us": 10000, 00:12:42.861 "nvme_ioq_poll_period_us": 0, 00:12:42.861 "io_queue_requests": 0, 00:12:42.861 "delay_cmd_submit": true, 00:12:42.861 "bdev_retry_count": 3, 00:12:42.861 "transport_ack_timeout": 0, 00:12:42.861 "ctrlr_loss_timeout_sec": 0, 00:12:42.861 "reconnect_delay_sec": 0, 00:12:42.861 "fast_io_fail_timeout_sec": 0, 00:12:42.861 "generate_uuids": false, 00:12:42.861 "transport_tos": 0, 00:12:42.861 "io_path_stat": false, 00:12:42.861 "allow_accel_sequence": false 00:12:42.861 } 00:12:42.861 }, 00:12:42.861 { 00:12:42.861 "method": "bdev_nvme_set_hotplug", 00:12:42.861 "params": { 00:12:42.861 "period_us": 100000, 00:12:42.861 "enable": false 00:12:42.861 } 00:12:42.861 }, 00:12:42.861 { 00:12:42.861 "method": 
"bdev_malloc_create", 00:12:42.861 "params": { 00:12:42.861 "name": "malloc0", 00:12:42.861 "num_blocks": 8192, 00:12:42.861 "block_size": 4096, 00:12:42.861 "physical_block_size": 4096, 00:12:42.861 "uuid": "e5a2c311-fb6b-4f29-a2a6-568cd8e048fd", 00:12:42.861 "optimal_io_boundary": 0 00:12:42.861 } 00:12:42.861 }, 00:12:42.861 { 00:12:42.861 "method": "bdev_wait_for_examine" 00:12:42.861 } 00:12:42.861 ] 00:12:42.861 }, 00:12:42.861 { 00:12:42.861 "subsystem": "nbd", 00:12:42.861 "config": [] 00:12:42.861 }, 00:12:42.861 { 00:12:42.861 "subsystem": "scheduler", 00:12:42.861 "config": [ 00:12:42.861 { 00:12:42.861 "method": "framework_set_scheduler", 00:12:42.861 "params": { 00:12:42.861 "name": "static" 00:12:42.861 } 00:12:42.861 } 00:12:42.861 ] 00:12:42.861 }, 00:12:42.861 { 00:12:42.861 "subsystem": "nvmf", 00:12:42.861 "config": [ 00:12:42.861 { 00:12:42.861 "method": "nvmf_set_config", 00:12:42.861 "params": { 00:12:42.861 "discovery_filter": "match_any", 00:12:42.861 "admin_cmd_passthru": { 00:12:42.861 "identify_ctrlr": false 00:12:42.861 } 00:12:42.861 } 00:12:42.861 }, 00:12:42.861 { 00:12:42.861 "method": "nvmf_set_max_subsystems", 00:12:42.861 "params": { 00:12:42.861 "max_subsystems": 1024 00:12:42.861 } 00:12:42.861 }, 00:12:42.861 { 00:12:42.861 "method": "nvmf_set_crdt", 00:12:42.861 "params": { 00:12:42.861 "crdt1": 0, 00:12:42.861 "crdt2": 0, 00:12:42.861 "crdt3": 0 00:12:42.861 } 00:12:42.861 }, 00:12:42.861 { 00:12:42.861 "method": "nvmf_create_transport", 00:12:42.861 "params": { 00:12:42.861 "trtype": "TCP", 00:12:42.861 "max_queue_depth": 128, 00:12:42.861 "max_io_qpairs_per_ctrlr": 127, 00:12:42.861 "in_capsule_data_size": 4096, 00:12:42.861 "max_io_size": 131072, 00:12:42.861 "io_unit_size": 131072, 00:12:42.861 "max_aq_depth": 128, 00:12:42.861 "num_shared_buffers": 511, 00:12:42.861 "buf_cache_size": 4294967295, 00:12:42.861 "dif_insert_or_strip": false, 00:12:42.861 "zcopy": false, 00:12:42.861 "c2h_success": false, 00:12:42.861 "sock_priority": 0, 00:12:42.861 "abort_timeout_sec": 1 00:12:42.861 } 00:12:42.861 }, 00:12:42.861 { 00:12:42.861 "method": "nvmf_create_subsystem", 00:12:42.861 "params": { 00:12:42.861 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:42.861 "allow_any_host": false, 00:12:42.861 "serial_number": "SPDK00000000000001", 00:12:42.861 "model_number": "SPDK bdev Controller", 00:12:42.861 "max_namespaces": 10, 00:12:42.861 "min_cntlid": 1, 00:12:42.861 "max_cntlid": 65519, 00:12:42.861 "ana_reporting": false 00:12:42.861 } 00:12:42.861 }, 00:12:42.861 { 00:12:42.861 "method": "nvmf_subsystem_add_host", 00:12:42.861 "params": { 00:12:42.861 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:42.861 "host": "nqn.2016-06.io.spdk:host1", 00:12:42.861 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:42.861 } 00:12:42.861 }, 00:12:42.861 { 00:12:42.861 "method": "nvmf_subsystem_add_ns", 00:12:42.861 "params": { 00:12:42.861 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:42.861 "namespace": { 00:12:42.861 "nsid": 1, 00:12:42.861 "bdev_name": "malloc0", 00:12:42.861 "nguid": "E5A2C311FB6B4F29A2A6568CD8E048FD", 00:12:42.861 "uuid": "e5a2c311-fb6b-4f29-a2a6-568cd8e048fd" 00:12:42.861 } 00:12:42.861 } 00:12:42.861 }, 00:12:42.861 { 00:12:42.861 "method": "nvmf_subsystem_add_listener", 00:12:42.861 "params": { 00:12:42.861 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:42.861 "listen_address": { 00:12:42.861 "trtype": "TCP", 00:12:42.861 "adrfam": "IPv4", 00:12:42.861 "traddr": "10.0.0.2", 00:12:42.861 "trsvcid": "4420" 00:12:42.861 }, 00:12:42.861 
"secure_channel": true 00:12:42.861 } 00:12:42.861 } 00:12:42.861 ] 00:12:42.861 } 00:12:42.861 ] 00:12:42.861 }' 00:12:42.861 10:17:56 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:12:43.120 10:17:56 -- target/tls.sh@206 -- # bdevperfconf='{ 00:12:43.120 "subsystems": [ 00:12:43.120 { 00:12:43.120 "subsystem": "iobuf", 00:12:43.120 "config": [ 00:12:43.120 { 00:12:43.120 "method": "iobuf_set_options", 00:12:43.120 "params": { 00:12:43.120 "small_pool_count": 8192, 00:12:43.120 "large_pool_count": 1024, 00:12:43.120 "small_bufsize": 8192, 00:12:43.120 "large_bufsize": 135168 00:12:43.120 } 00:12:43.120 } 00:12:43.120 ] 00:12:43.120 }, 00:12:43.120 { 00:12:43.120 "subsystem": "sock", 00:12:43.120 "config": [ 00:12:43.120 { 00:12:43.120 "method": "sock_impl_set_options", 00:12:43.120 "params": { 00:12:43.120 "impl_name": "uring", 00:12:43.120 "recv_buf_size": 2097152, 00:12:43.120 "send_buf_size": 2097152, 00:12:43.120 "enable_recv_pipe": true, 00:12:43.120 "enable_quickack": false, 00:12:43.120 "enable_placement_id": 0, 00:12:43.120 "enable_zerocopy_send_server": false, 00:12:43.120 "enable_zerocopy_send_client": false, 00:12:43.120 "zerocopy_threshold": 0, 00:12:43.120 "tls_version": 0, 00:12:43.120 "enable_ktls": false 00:12:43.120 } 00:12:43.120 }, 00:12:43.120 { 00:12:43.120 "method": "sock_impl_set_options", 00:12:43.120 "params": { 00:12:43.120 "impl_name": "posix", 00:12:43.120 "recv_buf_size": 2097152, 00:12:43.120 "send_buf_size": 2097152, 00:12:43.120 "enable_recv_pipe": true, 00:12:43.120 "enable_quickack": false, 00:12:43.120 "enable_placement_id": 0, 00:12:43.120 "enable_zerocopy_send_server": true, 00:12:43.120 "enable_zerocopy_send_client": false, 00:12:43.120 "zerocopy_threshold": 0, 00:12:43.120 "tls_version": 0, 00:12:43.120 "enable_ktls": false 00:12:43.120 } 00:12:43.120 }, 00:12:43.120 { 00:12:43.120 "method": "sock_impl_set_options", 00:12:43.120 "params": { 00:12:43.120 "impl_name": "ssl", 00:12:43.120 "recv_buf_size": 4096, 00:12:43.120 "send_buf_size": 4096, 00:12:43.120 "enable_recv_pipe": true, 00:12:43.120 "enable_quickack": false, 00:12:43.120 "enable_placement_id": 0, 00:12:43.120 "enable_zerocopy_send_server": true, 00:12:43.120 "enable_zerocopy_send_client": false, 00:12:43.120 "zerocopy_threshold": 0, 00:12:43.120 "tls_version": 0, 00:12:43.120 "enable_ktls": false 00:12:43.120 } 00:12:43.120 } 00:12:43.120 ] 00:12:43.120 }, 00:12:43.120 { 00:12:43.120 "subsystem": "vmd", 00:12:43.120 "config": [] 00:12:43.120 }, 00:12:43.120 { 00:12:43.120 "subsystem": "accel", 00:12:43.120 "config": [ 00:12:43.120 { 00:12:43.120 "method": "accel_set_options", 00:12:43.120 "params": { 00:12:43.120 "small_cache_size": 128, 00:12:43.120 "large_cache_size": 16, 00:12:43.120 "task_count": 2048, 00:12:43.120 "sequence_count": 2048, 00:12:43.120 "buf_count": 2048 00:12:43.120 } 00:12:43.120 } 00:12:43.120 ] 00:12:43.120 }, 00:12:43.120 { 00:12:43.120 "subsystem": "bdev", 00:12:43.120 "config": [ 00:12:43.120 { 00:12:43.120 "method": "bdev_set_options", 00:12:43.120 "params": { 00:12:43.121 "bdev_io_pool_size": 65535, 00:12:43.121 "bdev_io_cache_size": 256, 00:12:43.121 "bdev_auto_examine": true, 00:12:43.121 "iobuf_small_cache_size": 128, 00:12:43.121 "iobuf_large_cache_size": 16 00:12:43.121 } 00:12:43.121 }, 00:12:43.121 { 00:12:43.121 "method": "bdev_raid_set_options", 00:12:43.121 "params": { 00:12:43.121 "process_window_size_kb": 1024 00:12:43.121 } 00:12:43.121 }, 00:12:43.121 { 00:12:43.121 "method": 
"bdev_iscsi_set_options", 00:12:43.121 "params": { 00:12:43.121 "timeout_sec": 30 00:12:43.121 } 00:12:43.121 }, 00:12:43.121 { 00:12:43.121 "method": "bdev_nvme_set_options", 00:12:43.121 "params": { 00:12:43.121 "action_on_timeout": "none", 00:12:43.121 "timeout_us": 0, 00:12:43.121 "timeout_admin_us": 0, 00:12:43.121 "keep_alive_timeout_ms": 10000, 00:12:43.121 "transport_retry_count": 4, 00:12:43.121 "arbitration_burst": 0, 00:12:43.121 "low_priority_weight": 0, 00:12:43.121 "medium_priority_weight": 0, 00:12:43.121 "high_priority_weight": 0, 00:12:43.121 "nvme_adminq_poll_period_us": 10000, 00:12:43.121 "nvme_ioq_poll_period_us": 0, 00:12:43.121 "io_queue_requests": 512, 00:12:43.121 "delay_cmd_submit": true, 00:12:43.121 "bdev_retry_count": 3, 00:12:43.121 "transport_ack_timeout": 0, 00:12:43.121 "ctrlr_loss_timeout_sec": 0, 00:12:43.121 "reconnect_delay_sec": 0, 00:12:43.121 "fast_io_fail_timeout_sec": 0, 00:12:43.121 "generate_uuids": false, 00:12:43.121 "transport_tos": 0, 00:12:43.121 "io_path_stat": false, 00:12:43.121 "allow_accel_sequence": false 00:12:43.121 } 00:12:43.121 }, 00:12:43.121 { 00:12:43.121 "method": "bdev_nvme_attach_controller", 00:12:43.121 "params": { 00:12:43.121 "name": "TLSTEST", 00:12:43.121 "trtype": "TCP", 00:12:43.121 "adrfam": "IPv4", 00:12:43.121 "traddr": "10.0.0.2", 00:12:43.121 "trsvcid": "4420", 00:12:43.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:43.121 "prchk_reftag": false, 00:12:43.121 "prchk_guard": false, 00:12:43.121 "ctrlr_loss_timeout_sec": 0, 00:12:43.121 "reconnect_delay_sec": 0, 00:12:43.121 "fast_io_fail_timeout_sec": 0, 00:12:43.121 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:43.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:43.121 "hdgst": false, 00:12:43.121 "ddgst": false 00:12:43.121 } 00:12:43.121 }, 00:12:43.121 { 00:12:43.121 "method": "bdev_nvme_set_hotplug", 00:12:43.121 "params": { 00:12:43.121 "period_us": 100000, 00:12:43.121 "enable": false 00:12:43.121 } 00:12:43.121 }, 00:12:43.121 { 00:12:43.121 "method": "bdev_wait_for_examine" 00:12:43.121 } 00:12:43.121 ] 00:12:43.121 }, 00:12:43.121 { 00:12:43.121 "subsystem": "nbd", 00:12:43.121 "config": [] 00:12:43.121 } 00:12:43.121 ] 00:12:43.121 }' 00:12:43.121 10:17:56 -- target/tls.sh@208 -- # killprocess 77021 00:12:43.121 10:17:56 -- common/autotest_common.sh@926 -- # '[' -z 77021 ']' 00:12:43.121 10:17:56 -- common/autotest_common.sh@930 -- # kill -0 77021 00:12:43.121 10:17:56 -- common/autotest_common.sh@931 -- # uname 00:12:43.121 10:17:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:43.121 10:17:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77021 00:12:43.380 10:17:56 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:43.380 10:17:56 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:43.380 killing process with pid 77021 00:12:43.380 10:17:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77021' 00:12:43.380 10:17:56 -- common/autotest_common.sh@945 -- # kill 77021 00:12:43.380 10:17:56 -- common/autotest_common.sh@950 -- # wait 77021 00:12:43.380 Received shutdown signal, test time was about 10.000000 seconds 00:12:43.380 00:12:43.380 Latency(us) 00:12:43.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.380 =================================================================================================================== 00:12:43.380 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 
00:12:43.380 10:17:56 -- target/tls.sh@209 -- # killprocess 76972 00:12:43.380 10:17:56 -- common/autotest_common.sh@926 -- # '[' -z 76972 ']' 00:12:43.380 10:17:56 -- common/autotest_common.sh@930 -- # kill -0 76972 00:12:43.380 10:17:56 -- common/autotest_common.sh@931 -- # uname 00:12:43.380 10:17:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:43.380 10:17:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76972 00:12:43.380 10:17:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:43.380 10:17:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:43.380 killing process with pid 76972 00:12:43.380 10:17:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76972' 00:12:43.380 10:17:56 -- common/autotest_common.sh@945 -- # kill 76972 00:12:43.380 10:17:56 -- common/autotest_common.sh@950 -- # wait 76972 00:12:43.639 10:17:57 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:12:43.639 10:17:57 -- target/tls.sh@212 -- # echo '{ 00:12:43.639 "subsystems": [ 00:12:43.639 { 00:12:43.639 "subsystem": "iobuf", 00:12:43.639 "config": [ 00:12:43.639 { 00:12:43.639 "method": "iobuf_set_options", 00:12:43.639 "params": { 00:12:43.639 "small_pool_count": 8192, 00:12:43.639 "large_pool_count": 1024, 00:12:43.639 "small_bufsize": 8192, 00:12:43.639 "large_bufsize": 135168 00:12:43.639 } 00:12:43.639 } 00:12:43.639 ] 00:12:43.639 }, 00:12:43.639 { 00:12:43.639 "subsystem": "sock", 00:12:43.639 "config": [ 00:12:43.639 { 00:12:43.639 "method": "sock_impl_set_options", 00:12:43.639 "params": { 00:12:43.639 "impl_name": "uring", 00:12:43.639 "recv_buf_size": 2097152, 00:12:43.639 "send_buf_size": 2097152, 00:12:43.639 "enable_recv_pipe": true, 00:12:43.639 "enable_quickack": false, 00:12:43.639 "enable_placement_id": 0, 00:12:43.639 "enable_zerocopy_send_server": false, 00:12:43.639 "enable_zerocopy_send_client": false, 00:12:43.639 "zerocopy_threshold": 0, 00:12:43.639 "tls_version": 0, 00:12:43.639 "enable_ktls": false 00:12:43.639 } 00:12:43.639 }, 00:12:43.639 { 00:12:43.639 "method": "sock_impl_set_options", 00:12:43.639 "params": { 00:12:43.639 "impl_name": "posix", 00:12:43.639 "recv_buf_size": 2097152, 00:12:43.639 "send_buf_size": 2097152, 00:12:43.639 "enable_recv_pipe": true, 00:12:43.639 "enable_quickack": false, 00:12:43.639 "enable_placement_id": 0, 00:12:43.639 "enable_zerocopy_send_server": true, 00:12:43.639 "enable_zerocopy_send_client": false, 00:12:43.639 "zerocopy_threshold": 0, 00:12:43.639 "tls_version": 0, 00:12:43.639 "enable_ktls": false 00:12:43.639 } 00:12:43.639 }, 00:12:43.639 { 00:12:43.639 "method": "sock_impl_set_options", 00:12:43.639 "params": { 00:12:43.639 "impl_name": "ssl", 00:12:43.639 "recv_buf_size": 4096, 00:12:43.640 "send_buf_size": 4096, 00:12:43.640 "enable_recv_pipe": true, 00:12:43.640 "enable_quickack": false, 00:12:43.640 "enable_placement_id": 0, 00:12:43.640 "enable_zerocopy_send_server": true, 00:12:43.640 "enable_zerocopy_send_client": false, 00:12:43.640 "zerocopy_threshold": 0, 00:12:43.640 "tls_version": 0, 00:12:43.640 "enable_ktls": false 00:12:43.640 } 00:12:43.640 } 00:12:43.640 ] 00:12:43.640 }, 00:12:43.640 { 00:12:43.640 "subsystem": "vmd", 00:12:43.640 "config": [] 00:12:43.640 }, 00:12:43.640 { 00:12:43.640 "subsystem": "accel", 00:12:43.640 "config": [ 00:12:43.640 { 00:12:43.640 "method": "accel_set_options", 00:12:43.640 "params": { 00:12:43.640 "small_cache_size": 128, 00:12:43.640 "large_cache_size": 16, 00:12:43.640 "task_count": 
2048, 00:12:43.640 "sequence_count": 2048, 00:12:43.640 "buf_count": 2048 00:12:43.640 } 00:12:43.640 } 00:12:43.640 ] 00:12:43.640 }, 00:12:43.640 { 00:12:43.640 "subsystem": "bdev", 00:12:43.640 "config": [ 00:12:43.640 { 00:12:43.640 "method": "bdev_set_options", 00:12:43.640 "params": { 00:12:43.640 "bdev_io_pool_size": 65535, 00:12:43.640 "bdev_io_cache_size": 256, 00:12:43.640 "bdev_auto_examine": true, 00:12:43.640 "iobuf_small_cache_size": 128, 00:12:43.640 "iobuf_large_cache_size": 16 00:12:43.640 } 00:12:43.640 }, 00:12:43.640 { 00:12:43.640 "method": "bdev_raid_set_options", 00:12:43.640 "params": { 00:12:43.640 "process_window_size_kb": 1024 00:12:43.640 } 00:12:43.640 }, 00:12:43.640 { 00:12:43.640 "method": "bdev_iscsi_set_options", 00:12:43.640 "params": { 00:12:43.640 "timeout_sec": 30 00:12:43.640 } 00:12:43.640 }, 00:12:43.640 { 00:12:43.640 "method": "bdev_nvme_set_options", 00:12:43.640 "params": { 00:12:43.640 "action_on_timeout": "none", 00:12:43.640 "timeout_us": 0, 00:12:43.640 "timeout_admin_us": 0, 00:12:43.640 "keep_alive_timeout_ms": 10000, 00:12:43.640 "transport_retry_count": 4, 00:12:43.640 "arbitration_burst": 0, 00:12:43.640 "low_priority_weight": 0, 00:12:43.640 "medium_priority_weight": 0, 00:12:43.640 "high_priority_weight": 0, 00:12:43.640 "nvme_adminq_poll_period_us": 10000, 00:12:43.640 "nvme_ioq_poll_period_us": 0, 00:12:43.640 "io_queue_requests": 0, 00:12:43.640 "delay_cmd_submit": true, 00:12:43.640 "bdev_retry_count": 3, 00:12:43.640 "transport_ack_timeout": 0, 00:12:43.640 "ctrlr_loss_timeout_sec": 0, 00:12:43.640 "reconnect_delay_sec": 0, 00:12:43.640 "fast_io_fail_timeout_sec": 0, 00:12:43.640 "generate_uuids": false, 00:12:43.640 "transport_tos": 0, 00:12:43.640 "io_path_stat": false, 00:12:43.640 "allow_accel_sequence": false 00:12:43.640 } 00:12:43.640 }, 00:12:43.640 { 00:12:43.640 "method": "bdev_nvme_set_hotplug", 00:12:43.640 "params": { 00:12:43.640 "period_us": 100000, 00:12:43.640 "enable": false 00:12:43.640 } 00:12:43.640 }, 00:12:43.640 { 00:12:43.640 "method": "bdev_malloc_create", 00:12:43.640 "params": { 00:12:43.640 "name": "malloc0", 00:12:43.640 "num_blocks": 8192, 00:12:43.640 "block_size": 4096, 00:12:43.640 "physical_block_size": 4096, 00:12:43.640 "uuid": "e5a2c311-fb6b-4f29-a2a6-568cd8e048fd", 00:12:43.640 "optimal_io_boundary": 0 00:12:43.640 } 00:12:43.640 }, 00:12:43.640 { 00:12:43.640 "method": "bdev_wait_for_examine" 00:12:43.640 } 00:12:43.640 ] 00:12:43.640 }, 00:12:43.640 { 00:12:43.640 "subsystem": "nbd", 00:12:43.640 "config": [] 00:12:43.640 }, 00:12:43.640 { 00:12:43.640 "subsystem": "scheduler", 00:12:43.640 "config": [ 00:12:43.640 { 00:12:43.640 "method": "framework_set_scheduler", 00:12:43.640 "params": { 00:12:43.640 "name": "static" 00:12:43.640 } 00:12:43.640 } 00:12:43.640 ] 00:12:43.640 }, 00:12:43.640 { 00:12:43.640 "subsystem": "nvmf", 00:12:43.640 "config": [ 00:12:43.640 { 00:12:43.640 "method": "nvmf_set_config", 00:12:43.640 "params": { 00:12:43.640 "discovery_filter": "match_any", 00:12:43.640 "admin_cmd_passthru": { 00:12:43.640 "identify_ctrlr": false 00:12:43.640 } 00:12:43.640 } 00:12:43.640 }, 00:12:43.640 { 00:12:43.640 "method": "nvmf_set_max_subsystems", 00:12:43.640 "params": { 00:12:43.640 "max_subsystems": 1024 00:12:43.640 } 00:12:43.640 }, 00:12:43.640 { 00:12:43.640 "method": "nvmf_set_crdt", 00:12:43.640 "params": { 00:12:43.640 "crdt1": 0, 00:12:43.640 "crdt2": 0, 00:12:43.640 "crdt3": 0 00:12:43.640 } 00:12:43.640 }, 00:12:43.640 { 00:12:43.640 "method": 
"nvmf_create_transport", 00:12:43.640 "params": { 00:12:43.640 "trtype": "TCP", 00:12:43.640 "max_queue_depth": 128, 00:12:43.640 "max_io_qpairs_per_ctrlr": 127, 00:12:43.640 "in_capsule_data_size": 4096, 00:12:43.640 "max_io_size": 131072, 00:12:43.640 "io_unit_size": 131072, 00:12:43.640 "max_aq_depth": 128, 00:12:43.640 "num_shared_buffers": 511, 00:12:43.640 "buf_cache_size": 4294967295, 00:12:43.640 "dif_insert_or_strip": false, 00:12:43.640 "zcopy": false, 00:12:43.640 "c2h_success": false, 00:12:43.640 "sock_priority": 0, 00:12:43.640 "abort_timeout_sec": 1 00:12:43.640 } 00:12:43.640 }, 00:12:43.640 { 00:12:43.640 "method": "nvmf_create_subsystem", 00:12:43.640 "params": { 00:12:43.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:43.640 "allow_any_host": false, 00:12:43.640 "serial_number": "SPDK00000000000001", 00:12:43.640 "model_number": "SPDK bdev Controller", 00:12:43.640 "max_namespaces": 10, 00:12:43.640 "min_cntlid": 1, 00:12:43.640 "max_cntlid": 65519, 00:12:43.640 "ana_reporting": false 00:12:43.640 } 00:12:43.640 }, 00:12:43.640 { 00:12:43.640 "method": "nvmf_subsystem_add_host", 00:12:43.640 "params": { 00:12:43.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:43.640 "host": "nqn.2016-06.io.spdk:host1", 00:12:43.640 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:43.640 } 00:12:43.640 }, 00:12:43.640 { 00:12:43.640 "method": "nvmf_subsystem_add_ns", 00:12:43.640 "params": { 00:12:43.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:43.640 "namespace": { 00:12:43.640 "nsid": 1, 00:12:43.640 "bdev_name": "malloc0", 00:12:43.640 "nguid": "E5A2C311FB6B4F29A2A6568CD8E048FD", 00:12:43.640 "uuid": "e5a2c311-fb6b-4f29-a2a6-568cd8e048fd" 00:12:43.640 } 00:12:43.640 } 00:12:43.640 }, 00:12:43.640 { 00:12:43.640 "method": "nvmf_subsystem_add_listener", 00:12:43.640 "params": { 00:12:43.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:43.640 "listen_address": { 00:12:43.640 "trtype": "TCP", 00:12:43.640 "adrfam": "IPv4", 00:12:43.640 "traddr": "10.0.0.2", 00:12:43.640 "trsvcid": "4420" 00:12:43.640 }, 00:12:43.640 "secure_channel": true 00:12:43.640 } 00:12:43.640 } 00:12:43.640 ] 00:12:43.640 } 00:12:43.640 ] 00:12:43.640 }' 00:12:43.640 10:17:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:43.640 10:17:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:43.640 10:17:57 -- common/autotest_common.sh@10 -- # set +x 00:12:43.640 10:17:57 -- nvmf/common.sh@469 -- # nvmfpid=77070 00:12:43.640 10:17:57 -- nvmf/common.sh@470 -- # waitforlisten 77070 00:12:43.640 10:17:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:12:43.640 10:17:57 -- common/autotest_common.sh@819 -- # '[' -z 77070 ']' 00:12:43.640 10:17:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.640 10:17:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:43.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.641 10:17:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.641 10:17:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:43.641 10:17:57 -- common/autotest_common.sh@10 -- # set +x 00:12:43.641 [2024-07-26 10:17:57.091560] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:12:43.641 [2024-07-26 10:17:57.091687] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.901 [2024-07-26 10:17:57.233110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.901 [2024-07-26 10:17:57.306809] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:43.901 [2024-07-26 10:17:57.306961] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:43.901 [2024-07-26 10:17:57.306974] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:43.901 [2024-07-26 10:17:57.306981] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:43.901 [2024-07-26 10:17:57.307005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.159 [2024-07-26 10:17:57.530832] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:44.159 [2024-07-26 10:17:57.562796] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:44.159 [2024-07-26 10:17:57.563002] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.726 10:17:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:44.726 10:17:57 -- common/autotest_common.sh@852 -- # return 0 00:12:44.726 10:17:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:44.726 10:17:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:44.726 10:17:57 -- common/autotest_common.sh@10 -- # set +x 00:12:44.726 10:17:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.726 10:17:58 -- target/tls.sh@216 -- # bdevperf_pid=77102 00:12:44.726 10:17:58 -- target/tls.sh@217 -- # waitforlisten 77102 /var/tmp/bdevperf.sock 00:12:44.726 10:17:58 -- common/autotest_common.sh@819 -- # '[' -z 77102 ']' 00:12:44.726 10:17:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:44.726 10:17:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:44.726 10:17:58 -- target/tls.sh@213 -- # echo '{ 00:12:44.726 "subsystems": [ 00:12:44.726 { 00:12:44.726 "subsystem": "iobuf", 00:12:44.726 "config": [ 00:12:44.726 { 00:12:44.726 "method": "iobuf_set_options", 00:12:44.726 "params": { 00:12:44.726 "small_pool_count": 8192, 00:12:44.726 "large_pool_count": 1024, 00:12:44.726 "small_bufsize": 8192, 00:12:44.726 "large_bufsize": 135168 00:12:44.726 } 00:12:44.726 } 00:12:44.726 ] 00:12:44.726 }, 00:12:44.726 { 00:12:44.726 "subsystem": "sock", 00:12:44.726 "config": [ 00:12:44.726 { 00:12:44.726 "method": "sock_impl_set_options", 00:12:44.726 "params": { 00:12:44.726 "impl_name": "uring", 00:12:44.726 "recv_buf_size": 2097152, 00:12:44.726 "send_buf_size": 2097152, 00:12:44.726 "enable_recv_pipe": true, 00:12:44.726 "enable_quickack": false, 00:12:44.726 "enable_placement_id": 0, 00:12:44.726 "enable_zerocopy_send_server": false, 00:12:44.726 "enable_zerocopy_send_client": false, 00:12:44.726 "zerocopy_threshold": 0, 00:12:44.726 "tls_version": 0, 00:12:44.726 "enable_ktls": false 00:12:44.726 } 00:12:44.726 }, 00:12:44.726 { 00:12:44.726 "method": "sock_impl_set_options", 00:12:44.726 "params": { 00:12:44.726 "impl_name": "posix", 00:12:44.726 "recv_buf_size": 2097152, 00:12:44.726 
"send_buf_size": 2097152, 00:12:44.726 "enable_recv_pipe": true, 00:12:44.726 "enable_quickack": false, 00:12:44.726 "enable_placement_id": 0, 00:12:44.726 "enable_zerocopy_send_server": true, 00:12:44.726 "enable_zerocopy_send_client": false, 00:12:44.726 "zerocopy_threshold": 0, 00:12:44.726 "tls_version": 0, 00:12:44.726 "enable_ktls": false 00:12:44.726 } 00:12:44.726 }, 00:12:44.726 { 00:12:44.726 "method": "sock_impl_set_options", 00:12:44.726 "params": { 00:12:44.726 "impl_name": "ssl", 00:12:44.726 "recv_buf_size": 4096, 00:12:44.726 "send_buf_size": 4096, 00:12:44.726 "enable_recv_pipe": true, 00:12:44.726 "enable_quickack": false, 00:12:44.726 "enable_placement_id": 0, 00:12:44.726 "enable_zerocopy_send_server": true, 00:12:44.726 "enable_zerocopy_send_client": false, 00:12:44.726 "zerocopy_threshold": 0, 00:12:44.726 "tls_version": 0, 00:12:44.726 "enable_ktls": false 00:12:44.726 } 00:12:44.726 } 00:12:44.726 ] 00:12:44.726 }, 00:12:44.726 { 00:12:44.726 "subsystem": "vmd", 00:12:44.726 "config": [] 00:12:44.726 }, 00:12:44.726 { 00:12:44.726 "subsystem": "accel", 00:12:44.726 "config": [ 00:12:44.726 { 00:12:44.726 "method": "accel_set_options", 00:12:44.726 "params": { 00:12:44.726 "small_cache_size": 128, 00:12:44.726 "large_cache_size": 16, 00:12:44.726 "task_count": 2048, 00:12:44.726 "sequence_count": 2048, 00:12:44.726 "buf_count": 2048 00:12:44.726 } 00:12:44.726 } 00:12:44.726 ] 00:12:44.726 }, 00:12:44.726 { 00:12:44.726 "subsystem": "bdev", 00:12:44.726 "config": [ 00:12:44.726 { 00:12:44.726 "method": "bdev_set_options", 00:12:44.726 "params": { 00:12:44.726 "bdev_io_pool_size": 65535, 00:12:44.726 "bdev_io_cache_size": 256, 00:12:44.726 "bdev_auto_examine": true, 00:12:44.726 "iobuf_small_cache_size": 128, 00:12:44.726 "iobuf_large_cache_size": 16 00:12:44.726 } 00:12:44.726 }, 00:12:44.726 { 00:12:44.726 "method": "bdev_raid_set_options", 00:12:44.726 "params": { 00:12:44.726 "process_window_size_kb": 1024 00:12:44.726 } 00:12:44.727 }, 00:12:44.727 { 00:12:44.727 "method": "bdev_iscsi_set_options", 00:12:44.727 "params": { 00:12:44.727 "timeout_sec": 30 00:12:44.727 } 00:12:44.727 }, 00:12:44.727 { 00:12:44.727 "method": "bdev_nvme_set_options", 00:12:44.727 "params": { 00:12:44.727 "action_on_timeout": "none", 00:12:44.727 "timeout_us": 0, 00:12:44.727 "timeout_admin_us": 0, 00:12:44.727 "keep_alive_timeout_ms": 10000, 00:12:44.727 "transport_retry_count": 4, 00:12:44.727 "arbitration_burst": 0, 00:12:44.727 "low_priority_weight": 0, 00:12:44.727 "medium_priority_weight": 0, 00:12:44.727 "high_priority_weight": 0, 00:12:44.727 "nvme_adminq_poll_period_us": 10000, 00:12:44.727 "nvme_ioq_poll_period_us": 0, 00:12:44.727 "io_queue_requests": 512, 00:12:44.727 "delay_cmd_submit": true, 00:12:44.727 "bdev_retry_count": 3, 00:12:44.727 "transport_ack_timeout": 0, 00:12:44.727 "ctrlr_loss_timeout_sec": 0, 00:12:44.727 "reconnect_delay_sec": 0, 00:12:44.727 "fast_io_fail_timeout_sec": 0, 00:12:44.727 "generate_uuids": false, 00:12:44.727 "transport_tos": 0, 00:12:44.727 "io_path_stat": false, 00:12:44.727 "allow_accel_sequence": false 00:12:44.727 } 00:12:44.727 }, 00:12:44.727 { 00:12:44.727 "method": "bdev_nvme_attach_controller", 00:12:44.727 "params": { 00:12:44.727 "name": "TLSTEST", 00:12:44.727 "trtype": "TCP", 00:12:44.727 "adrfam": "IPv4", 00:12:44.727 "traddr": "10.0.0.2", 00:12:44.727 "trsvcid": "4420", 00:12:44.727 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:44.727 "prchk_reftag": false, 00:12:44.727 "prchk_guard": false, 00:12:44.727 
"ctrlr_loss_timeout_sec": 0, 00:12:44.727 "reconnect_delay_sec": 0, 00:12:44.727 "fast_io_fail_timeout_sec": 0, 00:12:44.727 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:44.727 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:44.727 "hdgst": false, 00:12:44.727 "ddgst": false 00:12:44.727 } 00:12:44.727 }, 00:12:44.727 { 00:12:44.727 "method": "bdev_nvme_set_hotplug", 00:12:44.727 "params": { 00:12:44.727 "period_us": 100000, 00:12:44.727 "enable": false 00:12:44.727 } 00:12:44.727 }, 00:12:44.727 { 00:12:44.727 "method": "bdev_wait_for_examine" 00:12:44.727 } 00:12:44.727 ] 00:12:44.727 }, 00:12:44.727 { 00:12:44.727 "subsystem": "nbd", 00:12:44.727 "config": [] 00:12:44.727 } 00:12:44.727 ] 00:12:44.727 }' 00:12:44.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:44.727 10:17:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:44.727 10:17:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:44.727 10:17:58 -- common/autotest_common.sh@10 -- # set +x 00:12:44.727 10:17:58 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:12:44.727 [2024-07-26 10:17:58.078437] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:12:44.727 [2024-07-26 10:17:58.079328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77102 ] 00:12:44.986 [2024-07-26 10:17:58.210935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.986 [2024-07-26 10:17:58.308075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.245 [2024-07-26 10:17:58.477701] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:45.812 10:17:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:45.812 10:17:59 -- common/autotest_common.sh@852 -- # return 0 00:12:45.812 10:17:59 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:45.812 Running I/O for 10 seconds... 
00:12:55.833 00:12:55.833 Latency(us) 00:12:55.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.833 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:55.833 Verification LBA range: start 0x0 length 0x2000 00:12:55.833 TLSTESTn1 : 10.01 6006.58 23.46 0.00 0.00 21276.83 5213.09 21209.83 00:12:55.833 =================================================================================================================== 00:12:55.833 Total : 6006.58 23.46 0.00 0.00 21276.83 5213.09 21209.83 00:12:55.833 0 00:12:55.833 10:18:09 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:55.833 10:18:09 -- target/tls.sh@223 -- # killprocess 77102 00:12:55.833 10:18:09 -- common/autotest_common.sh@926 -- # '[' -z 77102 ']' 00:12:55.833 10:18:09 -- common/autotest_common.sh@930 -- # kill -0 77102 00:12:55.833 10:18:09 -- common/autotest_common.sh@931 -- # uname 00:12:55.833 10:18:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:55.833 10:18:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77102 00:12:55.833 10:18:09 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:55.833 10:18:09 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:55.833 killing process with pid 77102 00:12:55.833 10:18:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77102' 00:12:55.833 10:18:09 -- common/autotest_common.sh@945 -- # kill 77102 00:12:55.833 Received shutdown signal, test time was about 10.000000 seconds 00:12:55.833 00:12:55.833 Latency(us) 00:12:55.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.833 =================================================================================================================== 00:12:55.833 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:55.833 10:18:09 -- common/autotest_common.sh@950 -- # wait 77102 00:12:56.091 10:18:09 -- target/tls.sh@224 -- # killprocess 77070 00:12:56.091 10:18:09 -- common/autotest_common.sh@926 -- # '[' -z 77070 ']' 00:12:56.092 10:18:09 -- common/autotest_common.sh@930 -- # kill -0 77070 00:12:56.092 10:18:09 -- common/autotest_common.sh@931 -- # uname 00:12:56.092 10:18:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:56.092 10:18:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77070 00:12:56.092 10:18:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:56.092 10:18:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:56.092 killing process with pid 77070 00:12:56.092 10:18:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77070' 00:12:56.092 10:18:09 -- common/autotest_common.sh@945 -- # kill 77070 00:12:56.092 10:18:09 -- common/autotest_common.sh@950 -- # wait 77070 00:12:56.351 10:18:09 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:12:56.351 10:18:09 -- target/tls.sh@227 -- # cleanup 00:12:56.351 10:18:09 -- target/tls.sh@15 -- # process_shm --id 0 00:12:56.351 10:18:09 -- common/autotest_common.sh@796 -- # type=--id 00:12:56.351 10:18:09 -- common/autotest_common.sh@797 -- # id=0 00:12:56.351 10:18:09 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:12:56.351 10:18:09 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:56.351 10:18:09 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:12:56.351 10:18:09 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 
00:12:56.351 10:18:09 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:12:56.351 10:18:09 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:56.351 nvmf_trace.0 00:12:56.351 10:18:09 -- common/autotest_common.sh@811 -- # return 0 00:12:56.351 10:18:09 -- target/tls.sh@16 -- # killprocess 77102 00:12:56.351 10:18:09 -- common/autotest_common.sh@926 -- # '[' -z 77102 ']' 00:12:56.351 10:18:09 -- common/autotest_common.sh@930 -- # kill -0 77102 00:12:56.351 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (77102) - No such process 00:12:56.351 Process with pid 77102 is not found 00:12:56.351 10:18:09 -- common/autotest_common.sh@953 -- # echo 'Process with pid 77102 is not found' 00:12:56.351 10:18:09 -- target/tls.sh@17 -- # nvmftestfini 00:12:56.351 10:18:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:56.351 10:18:09 -- nvmf/common.sh@116 -- # sync 00:12:56.351 10:18:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:56.351 10:18:09 -- nvmf/common.sh@119 -- # set +e 00:12:56.351 10:18:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:56.351 10:18:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:56.351 rmmod nvme_tcp 00:12:56.351 rmmod nvme_fabrics 00:12:56.351 rmmod nvme_keyring 00:12:56.351 10:18:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:56.351 10:18:09 -- nvmf/common.sh@123 -- # set -e 00:12:56.351 10:18:09 -- nvmf/common.sh@124 -- # return 0 00:12:56.351 10:18:09 -- nvmf/common.sh@477 -- # '[' -n 77070 ']' 00:12:56.351 10:18:09 -- nvmf/common.sh@478 -- # killprocess 77070 00:12:56.351 10:18:09 -- common/autotest_common.sh@926 -- # '[' -z 77070 ']' 00:12:56.351 10:18:09 -- common/autotest_common.sh@930 -- # kill -0 77070 00:12:56.351 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (77070) - No such process 00:12:56.351 Process with pid 77070 is not found 00:12:56.351 10:18:09 -- common/autotest_common.sh@953 -- # echo 'Process with pid 77070 is not found' 00:12:56.351 10:18:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:56.351 10:18:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:56.351 10:18:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:56.351 10:18:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:56.351 10:18:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:56.351 10:18:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.351 10:18:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:56.351 10:18:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.351 10:18:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:56.351 10:18:09 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:56.351 00:12:56.351 real 1m10.147s 00:12:56.351 user 1m47.610s 00:12:56.351 sys 0m24.457s 00:12:56.351 10:18:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:56.351 10:18:09 -- common/autotest_common.sh@10 -- # set +x 00:12:56.351 ************************************ 00:12:56.351 END TEST nvmf_tls 00:12:56.351 ************************************ 00:12:56.611 10:18:09 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:56.611 10:18:09 -- 
common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:56.611 10:18:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:56.611 10:18:09 -- common/autotest_common.sh@10 -- # set +x 00:12:56.611 ************************************ 00:12:56.611 START TEST nvmf_fips 00:12:56.611 ************************************ 00:12:56.611 10:18:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:56.611 * Looking for test storage... 00:12:56.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:12:56.611 10:18:09 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:56.611 10:18:09 -- nvmf/common.sh@7 -- # uname -s 00:12:56.611 10:18:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:56.611 10:18:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.611 10:18:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.611 10:18:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.611 10:18:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:56.611 10:18:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:56.611 10:18:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.611 10:18:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:56.611 10:18:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.611 10:18:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:56.611 10:18:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:12:56.611 10:18:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:12:56.611 10:18:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.611 10:18:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:56.611 10:18:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:56.611 10:18:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:56.611 10:18:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.611 10:18:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.611 10:18:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.611 10:18:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.611 10:18:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.611 10:18:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.611 10:18:09 -- paths/export.sh@5 -- # export PATH 00:12:56.611 10:18:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.611 10:18:09 -- nvmf/common.sh@46 -- # : 0 00:12:56.611 10:18:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:56.611 10:18:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:56.611 10:18:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:56.611 10:18:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.611 10:18:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.611 10:18:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:56.611 10:18:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:56.611 10:18:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:56.611 10:18:09 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:56.611 10:18:09 -- fips/fips.sh@89 -- # check_openssl_version 00:12:56.611 10:18:09 -- fips/fips.sh@83 -- # local target=3.0.0 00:12:56.611 10:18:09 -- fips/fips.sh@85 -- # openssl version 00:12:56.611 10:18:09 -- fips/fips.sh@85 -- # awk '{print $2}' 00:12:56.611 10:18:09 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:12:56.611 10:18:09 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:12:56.611 10:18:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:56.611 10:18:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:56.611 10:18:09 -- scripts/common.sh@335 -- # IFS=.-: 00:12:56.611 10:18:09 -- scripts/common.sh@335 -- # read -ra ver1 00:12:56.611 10:18:09 -- scripts/common.sh@336 -- # IFS=.-: 00:12:56.611 10:18:09 -- scripts/common.sh@336 -- # read -ra ver2 00:12:56.611 10:18:09 -- scripts/common.sh@337 -- # local 'op=>=' 00:12:56.611 10:18:09 -- scripts/common.sh@339 -- # ver1_l=3 00:12:56.611 10:18:09 -- scripts/common.sh@340 -- # ver2_l=3 00:12:56.611 10:18:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:56.611 10:18:09 -- scripts/common.sh@343 -- # case "$op" in 00:12:56.611 10:18:09 -- scripts/common.sh@347 -- # : 1 00:12:56.611 10:18:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:56.611 10:18:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:56.611 10:18:09 -- scripts/common.sh@364 -- # decimal 3 00:12:56.611 10:18:09 -- scripts/common.sh@352 -- # local d=3 00:12:56.611 10:18:09 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:56.611 10:18:09 -- scripts/common.sh@354 -- # echo 3 00:12:56.611 10:18:09 -- scripts/common.sh@364 -- # ver1[v]=3 00:12:56.611 10:18:09 -- scripts/common.sh@365 -- # decimal 3 00:12:56.611 10:18:09 -- scripts/common.sh@352 -- # local d=3 00:12:56.611 10:18:09 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:56.611 10:18:09 -- scripts/common.sh@354 -- # echo 3 00:12:56.611 10:18:09 -- scripts/common.sh@365 -- # ver2[v]=3 00:12:56.611 10:18:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:56.611 10:18:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:56.611 10:18:09 -- scripts/common.sh@363 -- # (( v++ )) 00:12:56.611 10:18:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:56.611 10:18:09 -- scripts/common.sh@364 -- # decimal 0 00:12:56.611 10:18:09 -- scripts/common.sh@352 -- # local d=0 00:12:56.611 10:18:09 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:12:56.611 10:18:09 -- scripts/common.sh@354 -- # echo 0 00:12:56.611 10:18:09 -- scripts/common.sh@364 -- # ver1[v]=0 00:12:56.611 10:18:09 -- scripts/common.sh@365 -- # decimal 0 00:12:56.611 10:18:09 -- scripts/common.sh@352 -- # local d=0 00:12:56.611 10:18:09 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:12:56.611 10:18:09 -- scripts/common.sh@354 -- # echo 0 00:12:56.611 10:18:09 -- scripts/common.sh@365 -- # ver2[v]=0 00:12:56.612 10:18:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:56.612 10:18:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:56.612 10:18:09 -- scripts/common.sh@363 -- # (( v++ )) 00:12:56.612 10:18:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:56.612 10:18:09 -- scripts/common.sh@364 -- # decimal 9 00:12:56.612 10:18:09 -- scripts/common.sh@352 -- # local d=9 00:12:56.612 10:18:09 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:12:56.612 10:18:09 -- scripts/common.sh@354 -- # echo 9 00:12:56.612 10:18:09 -- scripts/common.sh@364 -- # ver1[v]=9 00:12:56.612 10:18:09 -- scripts/common.sh@365 -- # decimal 0 00:12:56.612 10:18:09 -- scripts/common.sh@352 -- # local d=0 00:12:56.612 10:18:09 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:12:56.612 10:18:09 -- scripts/common.sh@354 -- # echo 0 00:12:56.612 10:18:09 -- scripts/common.sh@365 -- # ver2[v]=0 00:12:56.612 10:18:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:56.612 10:18:09 -- scripts/common.sh@366 -- # return 0 00:12:56.612 10:18:09 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:12:56.612 10:18:09 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:12:56.612 10:18:10 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:12:56.612 10:18:10 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:12:56.612 10:18:10 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:12:56.612 10:18:10 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:12:56.612 10:18:10 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:12:56.612 10:18:10 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:12:56.612 10:18:10 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:12:56.612 10:18:10 -- fips/fips.sh@114 -- # build_openssl_config 00:12:56.612 10:18:10 -- fips/fips.sh@37 -- # cat 00:12:56.612 10:18:10 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:12:56.612 10:18:10 -- fips/fips.sh@58 -- # cat - 00:12:56.612 10:18:10 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:12:56.612 10:18:10 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:12:56.612 10:18:10 -- fips/fips.sh@117 -- # mapfile -t providers 00:12:56.612 10:18:10 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:12:56.612 10:18:10 -- fips/fips.sh@117 -- # openssl list -providers 00:12:56.612 10:18:10 -- fips/fips.sh@117 -- # grep name 00:12:56.891 10:18:10 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:12:56.891 10:18:10 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:12:56.891 10:18:10 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:12:56.891 10:18:10 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:12:56.891 10:18:10 -- common/autotest_common.sh@640 -- # local es=0 00:12:56.891 10:18:10 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:12:56.891 10:18:10 -- common/autotest_common.sh@628 -- # local arg=openssl 00:12:56.891 10:18:10 -- fips/fips.sh@128 -- # : 00:12:56.891 10:18:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:56.891 10:18:10 -- common/autotest_common.sh@632 -- # type -t openssl 00:12:56.891 10:18:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:56.891 10:18:10 -- common/autotest_common.sh@634 -- # type -P openssl 00:12:56.891 10:18:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:56.891 10:18:10 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:12:56.891 10:18:10 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:12:56.891 10:18:10 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:12:56.892 Error setting digest 00:12:56.892 00D2C391A97F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:12:56.892 00D2C391A97F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:12:56.892 10:18:10 -- common/autotest_common.sh@643 -- # es=1 00:12:56.892 10:18:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:56.892 10:18:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:56.892 10:18:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
00:12:56.892 10:18:10 -- fips/fips.sh@131 -- # nvmftestinit 00:12:56.892 10:18:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:56.892 10:18:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:56.892 10:18:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:56.892 10:18:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:56.892 10:18:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:56.892 10:18:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.892 10:18:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:56.892 10:18:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.892 10:18:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:56.892 10:18:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:56.892 10:18:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:56.892 10:18:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:56.892 10:18:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:56.892 10:18:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:56.892 10:18:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:56.892 10:18:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:56.892 10:18:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:56.892 10:18:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:56.892 10:18:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:56.892 10:18:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:56.892 10:18:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:56.892 10:18:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:56.892 10:18:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:56.892 10:18:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:56.892 10:18:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:56.892 10:18:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:56.892 10:18:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:56.892 10:18:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:56.892 Cannot find device "nvmf_tgt_br" 00:12:56.892 10:18:10 -- nvmf/common.sh@154 -- # true 00:12:56.892 10:18:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:56.892 Cannot find device "nvmf_tgt_br2" 00:12:56.892 10:18:10 -- nvmf/common.sh@155 -- # true 00:12:56.892 10:18:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:56.892 10:18:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:56.892 Cannot find device "nvmf_tgt_br" 00:12:56.892 10:18:10 -- nvmf/common.sh@157 -- # true 00:12:56.892 10:18:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:56.892 Cannot find device "nvmf_tgt_br2" 00:12:56.892 10:18:10 -- nvmf/common.sh@158 -- # true 00:12:56.892 10:18:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:56.892 10:18:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:56.892 10:18:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:56.892 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:56.892 10:18:10 -- nvmf/common.sh@161 -- # true 00:12:56.892 10:18:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:56.892 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:12:56.892 10:18:10 -- nvmf/common.sh@162 -- # true 00:12:56.892 10:18:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:56.892 10:18:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:56.892 10:18:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:56.892 10:18:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:56.892 10:18:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:56.892 10:18:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:56.892 10:18:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:56.892 10:18:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:56.892 10:18:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:56.892 10:18:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:57.166 10:18:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:57.166 10:18:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:57.166 10:18:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:57.166 10:18:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:57.166 10:18:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:57.166 10:18:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:57.166 10:18:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:57.166 10:18:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:57.166 10:18:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:57.166 10:18:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:57.166 10:18:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:57.166 10:18:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:57.166 10:18:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:57.166 10:18:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:57.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:12:57.166 00:12:57.166 --- 10.0.0.2 ping statistics --- 00:12:57.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.166 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:12:57.166 10:18:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:57.166 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:57.166 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:12:57.166 00:12:57.166 --- 10.0.0.3 ping statistics --- 00:12:57.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.166 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:12:57.166 10:18:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:57.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:57.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:12:57.166 00:12:57.166 --- 10.0.0.1 ping statistics --- 00:12:57.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.166 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:57.166 10:18:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.166 10:18:10 -- nvmf/common.sh@421 -- # return 0 00:12:57.166 10:18:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:57.166 10:18:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.166 10:18:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:57.166 10:18:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:57.166 10:18:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.166 10:18:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:57.166 10:18:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:57.166 10:18:10 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:12:57.166 10:18:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:57.166 10:18:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:57.166 10:18:10 -- common/autotest_common.sh@10 -- # set +x 00:12:57.166 10:18:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:57.166 10:18:10 -- nvmf/common.sh@469 -- # nvmfpid=77458 00:12:57.166 10:18:10 -- nvmf/common.sh@470 -- # waitforlisten 77458 00:12:57.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.166 10:18:10 -- common/autotest_common.sh@819 -- # '[' -z 77458 ']' 00:12:57.166 10:18:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.166 10:18:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:57.166 10:18:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.166 10:18:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:57.166 10:18:10 -- common/autotest_common.sh@10 -- # set +x 00:12:57.166 [2024-07-26 10:18:10.556430] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:12:57.166 [2024-07-26 10:18:10.556551] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.424 [2024-07-26 10:18:10.697597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.424 [2024-07-26 10:18:10.781076] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:57.424 [2024-07-26 10:18:10.781241] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.424 [2024-07-26 10:18:10.781255] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.424 [2024-07-26 10:18:10.781264] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:57.424 [2024-07-26 10:18:10.781302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.360 10:18:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:58.360 10:18:11 -- common/autotest_common.sh@852 -- # return 0 00:12:58.360 10:18:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:58.360 10:18:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:58.360 10:18:11 -- common/autotest_common.sh@10 -- # set +x 00:12:58.360 10:18:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:58.360 10:18:11 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:12:58.360 10:18:11 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:58.360 10:18:11 -- fips/fips.sh@138 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:58.360 10:18:11 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:58.360 10:18:11 -- fips/fips.sh@140 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:58.360 10:18:11 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:58.360 10:18:11 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:58.360 10:18:11 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:58.360 [2024-07-26 10:18:11.799330] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:58.360 [2024-07-26 10:18:11.815303] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:58.360 [2024-07-26 10:18:11.815528] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.620 malloc0 00:12:58.620 10:18:11 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:58.620 10:18:11 -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:58.620 10:18:11 -- fips/fips.sh@148 -- # bdevperf_pid=77493 00:12:58.620 10:18:11 -- fips/fips.sh@149 -- # waitforlisten 77493 /var/tmp/bdevperf.sock 00:12:58.620 10:18:11 -- common/autotest_common.sh@819 -- # '[' -z 77493 ']' 00:12:58.620 10:18:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:58.620 10:18:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:58.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:58.620 10:18:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:58.620 10:18:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:58.620 10:18:11 -- common/autotest_common.sh@10 -- # set +x 00:12:58.620 [2024-07-26 10:18:11.927191] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:12:58.620 [2024-07-26 10:18:11.927309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77493 ] 00:12:58.620 [2024-07-26 10:18:12.061972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.878 [2024-07-26 10:18:12.162391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.447 10:18:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:59.447 10:18:12 -- common/autotest_common.sh@852 -- # return 0 00:12:59.447 10:18:12 -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:59.706 [2024-07-26 10:18:13.094221] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:59.971 TLSTESTn1 00:12:59.971 10:18:13 -- fips/fips.sh@155 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:59.971 Running I/O for 10 seconds... 00:13:09.952 00:13:09.952 Latency(us) 00:13:09.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.952 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:09.952 Verification LBA range: start 0x0 length 0x2000 00:13:09.952 TLSTESTn1 : 10.02 5450.23 21.29 0.00 0.00 23441.70 5093.93 27763.43 00:13:09.952 =================================================================================================================== 00:13:09.952 Total : 5450.23 21.29 0.00 0.00 23441.70 5093.93 27763.43 00:13:09.952 0 00:13:09.952 10:18:23 -- fips/fips.sh@1 -- # cleanup 00:13:09.952 10:18:23 -- fips/fips.sh@15 -- # process_shm --id 0 00:13:09.952 10:18:23 -- common/autotest_common.sh@796 -- # type=--id 00:13:09.952 10:18:23 -- common/autotest_common.sh@797 -- # id=0 00:13:09.952 10:18:23 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:13:09.952 10:18:23 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:09.952 10:18:23 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:13:09.952 10:18:23 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:13:09.952 10:18:23 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:13:09.952 10:18:23 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:09.952 nvmf_trace.0 00:13:10.210 10:18:23 -- common/autotest_common.sh@811 -- # return 0 00:13:10.210 10:18:23 -- fips/fips.sh@16 -- # killprocess 77493 00:13:10.210 10:18:23 -- common/autotest_common.sh@926 -- # '[' -z 77493 ']' 00:13:10.210 10:18:23 -- common/autotest_common.sh@930 -- # kill -0 77493 00:13:10.210 10:18:23 -- common/autotest_common.sh@931 -- # uname 00:13:10.210 10:18:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:10.210 10:18:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77493 00:13:10.210 10:18:23 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:13:10.210 10:18:23 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:13:10.210 killing process with pid 77493 00:13:10.210 10:18:23 -- common/autotest_common.sh@944 -- # echo 'killing 
process with pid 77493' 00:13:10.210 10:18:23 -- common/autotest_common.sh@945 -- # kill 77493 00:13:10.210 Received shutdown signal, test time was about 10.000000 seconds 00:13:10.210 00:13:10.210 Latency(us) 00:13:10.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.210 =================================================================================================================== 00:13:10.210 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:10.210 10:18:23 -- common/autotest_common.sh@950 -- # wait 77493 00:13:10.210 10:18:23 -- fips/fips.sh@17 -- # nvmftestfini 00:13:10.210 10:18:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:10.210 10:18:23 -- nvmf/common.sh@116 -- # sync 00:13:10.469 10:18:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:10.469 10:18:23 -- nvmf/common.sh@119 -- # set +e 00:13:10.469 10:18:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:10.469 10:18:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:10.469 rmmod nvme_tcp 00:13:10.469 rmmod nvme_fabrics 00:13:10.469 rmmod nvme_keyring 00:13:10.469 10:18:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:10.469 10:18:23 -- nvmf/common.sh@123 -- # set -e 00:13:10.469 10:18:23 -- nvmf/common.sh@124 -- # return 0 00:13:10.469 10:18:23 -- nvmf/common.sh@477 -- # '[' -n 77458 ']' 00:13:10.469 10:18:23 -- nvmf/common.sh@478 -- # killprocess 77458 00:13:10.469 10:18:23 -- common/autotest_common.sh@926 -- # '[' -z 77458 ']' 00:13:10.469 10:18:23 -- common/autotest_common.sh@930 -- # kill -0 77458 00:13:10.469 10:18:23 -- common/autotest_common.sh@931 -- # uname 00:13:10.469 10:18:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:10.469 10:18:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77458 00:13:10.469 10:18:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:10.469 10:18:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:10.469 killing process with pid 77458 00:13:10.469 10:18:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77458' 00:13:10.469 10:18:23 -- common/autotest_common.sh@945 -- # kill 77458 00:13:10.469 10:18:23 -- common/autotest_common.sh@950 -- # wait 77458 00:13:10.728 10:18:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:10.728 10:18:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:10.728 10:18:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:10.728 10:18:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:10.728 10:18:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:10.728 10:18:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.728 10:18:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:10.728 10:18:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.728 10:18:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:10.728 10:18:24 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:10.728 00:13:10.728 real 0m14.190s 00:13:10.728 user 0m19.013s 00:13:10.728 sys 0m5.938s 00:13:10.728 10:18:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:10.728 10:18:24 -- common/autotest_common.sh@10 -- # set +x 00:13:10.728 ************************************ 00:13:10.728 END TEST nvmf_fips 00:13:10.728 ************************************ 00:13:10.728 10:18:24 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:13:10.728 10:18:24 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:13:10.728 10:18:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:10.728 10:18:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:10.728 10:18:24 -- common/autotest_common.sh@10 -- # set +x 00:13:10.728 ************************************ 00:13:10.728 START TEST nvmf_fuzz 00:13:10.728 ************************************ 00:13:10.728 10:18:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:13:10.728 * Looking for test storage... 00:13:10.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:10.728 10:18:24 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:10.728 10:18:24 -- nvmf/common.sh@7 -- # uname -s 00:13:10.728 10:18:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.728 10:18:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.728 10:18:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.728 10:18:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.728 10:18:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.728 10:18:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.728 10:18:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.728 10:18:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.728 10:18:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.728 10:18:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.728 10:18:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:13:10.728 10:18:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:13:10.728 10:18:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.728 10:18:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.728 10:18:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:10.728 10:18:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:10.728 10:18:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.729 10:18:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.729 10:18:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.729 10:18:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.729 10:18:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.729 
10:18:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.729 10:18:24 -- paths/export.sh@5 -- # export PATH 00:13:10.729 10:18:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.729 10:18:24 -- nvmf/common.sh@46 -- # : 0 00:13:10.729 10:18:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:10.729 10:18:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:10.729 10:18:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:10.729 10:18:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.729 10:18:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.729 10:18:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:10.729 10:18:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:10.729 10:18:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:10.729 10:18:24 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:13:10.729 10:18:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:10.729 10:18:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:10.729 10:18:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:10.729 10:18:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:10.729 10:18:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:10.729 10:18:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.729 10:18:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:10.729 10:18:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.989 10:18:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:10.989 10:18:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:10.989 10:18:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:10.989 10:18:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:10.989 10:18:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:10.989 10:18:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:10.989 10:18:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.989 10:18:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:10.989 10:18:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:10.989 10:18:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:10.989 10:18:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:10.989 10:18:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:10.989 10:18:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:10.989 10:18:24 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:10.989 10:18:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:10.989 10:18:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:10.989 10:18:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:10.989 10:18:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:10.989 10:18:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:10.989 10:18:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:10.989 Cannot find device "nvmf_tgt_br" 00:13:10.989 10:18:24 -- nvmf/common.sh@154 -- # true 00:13:10.989 10:18:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:10.989 Cannot find device "nvmf_tgt_br2" 00:13:10.989 10:18:24 -- nvmf/common.sh@155 -- # true 00:13:10.989 10:18:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:10.989 10:18:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:10.989 Cannot find device "nvmf_tgt_br" 00:13:10.989 10:18:24 -- nvmf/common.sh@157 -- # true 00:13:10.989 10:18:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:10.989 Cannot find device "nvmf_tgt_br2" 00:13:10.989 10:18:24 -- nvmf/common.sh@158 -- # true 00:13:10.989 10:18:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:10.989 10:18:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:10.989 10:18:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:10.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:10.989 10:18:24 -- nvmf/common.sh@161 -- # true 00:13:10.989 10:18:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:10.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:10.989 10:18:24 -- nvmf/common.sh@162 -- # true 00:13:10.989 10:18:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:10.989 10:18:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:10.989 10:18:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:10.989 10:18:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:10.989 10:18:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:10.989 10:18:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:10.989 10:18:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:10.989 10:18:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:10.989 10:18:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:10.989 10:18:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:10.989 10:18:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:10.989 10:18:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:10.989 10:18:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:10.989 10:18:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:10.989 10:18:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:11.248 10:18:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:11.248 10:18:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:13:11.248 10:18:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:11.248 10:18:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:11.248 10:18:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:11.248 10:18:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:11.248 10:18:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:11.248 10:18:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:11.248 10:18:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:11.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:11.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:13:11.248 00:13:11.248 --- 10.0.0.2 ping statistics --- 00:13:11.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.248 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:13:11.248 10:18:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:11.248 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:11.248 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:13:11.248 00:13:11.248 --- 10.0.0.3 ping statistics --- 00:13:11.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.248 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:13:11.248 10:18:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:11.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:11.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:13:11.248 00:13:11.248 --- 10.0.0.1 ping statistics --- 00:13:11.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.248 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:13:11.248 10:18:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:11.248 10:18:24 -- nvmf/common.sh@421 -- # return 0 00:13:11.248 10:18:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:11.248 10:18:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:11.248 10:18:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:11.248 10:18:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:11.248 10:18:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:11.248 10:18:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:11.248 10:18:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:11.248 10:18:24 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=77813 00:13:11.248 10:18:24 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:11.248 10:18:24 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:11.248 10:18:24 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 77813 00:13:11.248 10:18:24 -- common/autotest_common.sh@819 -- # '[' -z 77813 ']' 00:13:11.248 10:18:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.248 10:18:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:11.248 10:18:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
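For reference, the veth/bridge topology that nvmf_veth_init assembles in the trace above reduces to roughly the following sketch, assembled from the commands as logged (the helper additionally tears down any stale links first and brings every interface up):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target address
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) are only connectivity checks before the target application is launched inside nvmf_tgt_ns_spdk.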
00:13:11.248 10:18:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:11.248 10:18:24 -- common/autotest_common.sh@10 -- # set +x 00:13:12.182 10:18:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:12.182 10:18:25 -- common/autotest_common.sh@852 -- # return 0 00:13:12.182 10:18:25 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:12.182 10:18:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.182 10:18:25 -- common/autotest_common.sh@10 -- # set +x 00:13:12.182 10:18:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.182 10:18:25 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:13:12.182 10:18:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.182 10:18:25 -- common/autotest_common.sh@10 -- # set +x 00:13:12.183 Malloc0 00:13:12.183 10:18:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.183 10:18:25 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:12.183 10:18:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.183 10:18:25 -- common/autotest_common.sh@10 -- # set +x 00:13:12.183 10:18:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.183 10:18:25 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:12.183 10:18:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.441 10:18:25 -- common/autotest_common.sh@10 -- # set +x 00:13:12.441 10:18:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.441 10:18:25 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.441 10:18:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.441 10:18:25 -- common/autotest_common.sh@10 -- # set +x 00:13:12.441 10:18:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.441 10:18:25 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:13:12.441 10:18:25 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:13:12.700 Shutting down the fuzz application 00:13:12.700 10:18:26 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:13:12.961 Shutting down the fuzz application 00:13:12.961 10:18:26 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.961 10:18:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.961 10:18:26 -- common/autotest_common.sh@10 -- # set +x 00:13:12.961 10:18:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.961 10:18:26 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:12.961 10:18:26 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:13:12.961 10:18:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:12.961 10:18:26 -- nvmf/common.sh@116 -- # sync 00:13:12.961 10:18:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:12.961 10:18:26 -- nvmf/common.sh@119 -- # set +e 00:13:12.961 10:18:26 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:13:12.961 10:18:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:12.961 rmmod nvme_tcp 00:13:12.961 rmmod nvme_fabrics 00:13:12.961 rmmod nvme_keyring 00:13:13.219 10:18:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:13.219 10:18:26 -- nvmf/common.sh@123 -- # set -e 00:13:13.219 10:18:26 -- nvmf/common.sh@124 -- # return 0 00:13:13.219 10:18:26 -- nvmf/common.sh@477 -- # '[' -n 77813 ']' 00:13:13.219 10:18:26 -- nvmf/common.sh@478 -- # killprocess 77813 00:13:13.219 10:18:26 -- common/autotest_common.sh@926 -- # '[' -z 77813 ']' 00:13:13.219 10:18:26 -- common/autotest_common.sh@930 -- # kill -0 77813 00:13:13.219 10:18:26 -- common/autotest_common.sh@931 -- # uname 00:13:13.219 10:18:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:13.219 10:18:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77813 00:13:13.219 10:18:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:13.219 killing process with pid 77813 00:13:13.219 10:18:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:13.219 10:18:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77813' 00:13:13.219 10:18:26 -- common/autotest_common.sh@945 -- # kill 77813 00:13:13.219 10:18:26 -- common/autotest_common.sh@950 -- # wait 77813 00:13:13.478 10:18:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:13.478 10:18:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:13.478 10:18:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:13.478 10:18:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:13.478 10:18:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:13.478 10:18:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.478 10:18:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.478 10:18:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.478 10:18:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:13.478 10:18:26 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:13:13.478 00:13:13.478 real 0m2.675s 00:13:13.478 user 0m2.766s 00:13:13.478 sys 0m0.657s 00:13:13.478 10:18:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:13.478 10:18:26 -- common/autotest_common.sh@10 -- # set +x 00:13:13.478 ************************************ 00:13:13.478 END TEST nvmf_fuzz 00:13:13.478 ************************************ 00:13:13.478 10:18:26 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:13:13.478 10:18:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:13.478 10:18:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:13.478 10:18:26 -- common/autotest_common.sh@10 -- # set +x 00:13:13.478 ************************************ 00:13:13.478 START TEST nvmf_multiconnection 00:13:13.478 ************************************ 00:13:13.479 10:18:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:13:13.479 * Looking for test storage... 
00:13:13.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:13.479 10:18:26 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:13.479 10:18:26 -- nvmf/common.sh@7 -- # uname -s 00:13:13.479 10:18:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.479 10:18:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.479 10:18:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.479 10:18:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.479 10:18:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.479 10:18:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.479 10:18:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.479 10:18:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.479 10:18:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.479 10:18:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.479 10:18:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:13:13.479 10:18:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:13:13.479 10:18:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.479 10:18:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.479 10:18:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:13.479 10:18:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:13.479 10:18:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.479 10:18:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.479 10:18:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.479 10:18:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.479 10:18:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.479 10:18:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.479 10:18:26 -- 
paths/export.sh@5 -- # export PATH 00:13:13.479 10:18:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.479 10:18:26 -- nvmf/common.sh@46 -- # : 0 00:13:13.479 10:18:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:13.479 10:18:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:13.479 10:18:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:13.479 10:18:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.479 10:18:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.479 10:18:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:13.479 10:18:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:13.479 10:18:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:13.479 10:18:26 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:13.479 10:18:26 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:13.479 10:18:26 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:13:13.479 10:18:26 -- target/multiconnection.sh@16 -- # nvmftestinit 00:13:13.479 10:18:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:13.479 10:18:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.479 10:18:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:13.479 10:18:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:13.479 10:18:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:13.479 10:18:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.479 10:18:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.479 10:18:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.479 10:18:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:13.479 10:18:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:13.479 10:18:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:13.479 10:18:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:13.479 10:18:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:13.479 10:18:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:13.479 10:18:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:13.479 10:18:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:13.479 10:18:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:13.479 10:18:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:13.479 10:18:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:13.479 10:18:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:13.479 10:18:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:13.479 10:18:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.479 10:18:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:13.479 10:18:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:13.479 10:18:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:13.479 10:18:26 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:13.479 10:18:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:13.479 10:18:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:13.738 Cannot find device "nvmf_tgt_br" 00:13:13.738 10:18:26 -- nvmf/common.sh@154 -- # true 00:13:13.738 10:18:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:13.738 Cannot find device "nvmf_tgt_br2" 00:13:13.738 10:18:26 -- nvmf/common.sh@155 -- # true 00:13:13.738 10:18:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:13.738 10:18:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:13.738 Cannot find device "nvmf_tgt_br" 00:13:13.738 10:18:26 -- nvmf/common.sh@157 -- # true 00:13:13.738 10:18:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:13.738 Cannot find device "nvmf_tgt_br2" 00:13:13.738 10:18:26 -- nvmf/common.sh@158 -- # true 00:13:13.738 10:18:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:13.738 10:18:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:13.738 10:18:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:13.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:13.738 10:18:27 -- nvmf/common.sh@161 -- # true 00:13:13.738 10:18:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:13.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:13.738 10:18:27 -- nvmf/common.sh@162 -- # true 00:13:13.738 10:18:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:13.738 10:18:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:13.738 10:18:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:13.738 10:18:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:13.738 10:18:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:13.738 10:18:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:13.738 10:18:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:13.738 10:18:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:13.738 10:18:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:13.738 10:18:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:13.738 10:18:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:13.738 10:18:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:13.738 10:18:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:13.738 10:18:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:13.738 10:18:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:13.738 10:18:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:13.738 10:18:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:13.738 10:18:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:13.738 10:18:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:13.738 10:18:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:13.997 10:18:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:13.997 
10:18:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:13.997 10:18:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:13.997 10:18:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:13.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:13.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:13:13.997 00:13:13.997 --- 10.0.0.2 ping statistics --- 00:13:13.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.997 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:13:13.997 10:18:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:13.997 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:13.997 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:13:13.997 00:13:13.997 --- 10.0.0.3 ping statistics --- 00:13:13.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.997 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:13:13.997 10:18:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:13.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:13.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:13:13.997 00:13:13.997 --- 10.0.0.1 ping statistics --- 00:13:13.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.997 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:13:13.997 10:18:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:13.997 10:18:27 -- nvmf/common.sh@421 -- # return 0 00:13:13.997 10:18:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:13.997 10:18:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:13.997 10:18:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:13.997 10:18:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:13.997 10:18:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:13.997 10:18:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:13.997 10:18:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:13.997 10:18:27 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:13:13.997 10:18:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:13.997 10:18:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:13.997 10:18:27 -- common/autotest_common.sh@10 -- # set +x 00:13:13.997 10:18:27 -- nvmf/common.sh@469 -- # nvmfpid=78008 00:13:13.997 10:18:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:13.997 10:18:27 -- nvmf/common.sh@470 -- # waitforlisten 78008 00:13:13.997 10:18:27 -- common/autotest_common.sh@819 -- # '[' -z 78008 ']' 00:13:13.997 10:18:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.997 10:18:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:13.997 10:18:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.997 10:18:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:13.997 10:18:27 -- common/autotest_common.sh@10 -- # set +x 00:13:13.997 [2024-07-26 10:18:27.310561] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
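The multiconnection target is launched the same way as the fuzz target earlier, differing only in the reactor mask; as logged above:

    # fuzz target:            -m 0x1  (single reactor)
    # multiconnection target: -m 0xF  (four reactors, matching the "Total cores
    #                                  available: 4" line that follows)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF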
00:13:13.997 [2024-07-26 10:18:27.310697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.997 [2024-07-26 10:18:27.448779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:14.256 [2024-07-26 10:18:27.537326] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:14.256 [2024-07-26 10:18:27.537818] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.256 [2024-07-26 10:18:27.537987] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.256 [2024-07-26 10:18:27.538119] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:14.256 [2024-07-26 10:18:27.538342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.256 [2024-07-26 10:18:27.538525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.256 [2024-07-26 10:18:27.538669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.256 [2024-07-26 10:18:27.538676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.192 10:18:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:15.192 10:18:28 -- common/autotest_common.sh@852 -- # return 0 00:13:15.192 10:18:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:15.192 10:18:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:15.192 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 10:18:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.192 10:18:28 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:15.192 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.192 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 [2024-07-26 10:18:28.336830] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.192 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.192 10:18:28 -- target/multiconnection.sh@21 -- # seq 1 11 00:13:15.192 10:18:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:15.192 10:18:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:15.192 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.192 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 Malloc1 00:13:15.192 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.192 10:18:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:13:15.192 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.192 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.192 10:18:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.192 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.192 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.192 10:18:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.192 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.192 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 [2024-07-26 10:18:28.416696] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.192 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.192 10:18:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:15.192 10:18:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:13:15.192 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.192 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 Malloc2 00:13:15.192 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.192 10:18:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:15.192 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.192 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.192 10:18:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:13:15.192 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.192 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.192 10:18:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:15.192 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.192 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.192 10:18:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:15.192 10:18:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:13:15.192 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.192 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 Malloc3 00:13:15.192 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.192 10:18:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:13:15.192 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.192 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.192 10:18:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:13:15.192 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.192 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.192 10:18:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:15.192 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.192 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.192 10:18:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:15.192 10:18:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:13:15.192 
10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.192 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 Malloc4 00:13:15.192 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.192 10:18:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:13:15.192 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.192 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.192 10:18:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:13:15.192 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.192 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.192 10:18:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:15.192 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.192 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.192 10:18:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:15.192 10:18:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:13:15.192 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.192 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 Malloc5 00:13:15.192 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.192 10:18:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:13:15.192 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.192 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.192 10:18:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:13:15.192 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.192 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.192 10:18:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:13:15.192 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.192 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.192 10:18:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:15.192 10:18:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:13:15.192 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.192 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.451 Malloc6 00:13:15.451 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.451 10:18:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:13:15.451 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.451 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.451 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.451 10:18:28 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:13:15.451 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.451 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.451 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.451 10:18:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:13:15.451 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.451 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.451 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.451 10:18:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:15.451 10:18:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:13:15.451 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.451 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.451 Malloc7 00:13:15.451 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.451 10:18:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:13:15.451 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.451 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.451 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.451 10:18:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:13:15.451 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.451 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.451 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.451 10:18:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:13:15.451 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.451 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.451 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.451 10:18:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:15.451 10:18:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:13:15.451 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.451 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.451 Malloc8 00:13:15.451 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.451 10:18:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:13:15.451 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.451 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.451 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.451 10:18:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:13:15.451 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.451 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.451 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.451 10:18:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:13:15.451 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.451 10:18:28 -- common/autotest_common.sh@10 -- # set +x 
00:13:15.451 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.452 10:18:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:15.452 10:18:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:13:15.452 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.452 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.452 Malloc9 00:13:15.452 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.452 10:18:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:13:15.452 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.452 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.452 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.452 10:18:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:13:15.452 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.452 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.452 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.452 10:18:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:13:15.452 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.452 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.452 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.452 10:18:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:15.452 10:18:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:13:15.452 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.452 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.452 Malloc10 00:13:15.452 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.452 10:18:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:13:15.452 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.452 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.452 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.452 10:18:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:13:15.452 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.452 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.452 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.452 10:18:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:13:15.452 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.452 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.452 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.452 10:18:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:15.452 10:18:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:13:15.452 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.452 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.711 Malloc11 00:13:15.711 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.711 10:18:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:13:15.711 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.711 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.711 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.711 10:18:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:13:15.711 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.711 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.711 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.711 10:18:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:13:15.711 10:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.711 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.711 10:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.711 10:18:28 -- target/multiconnection.sh@28 -- # seq 1 11 00:13:15.711 10:18:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:15.711 10:18:28 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 --hostid=ff23582e-2e24-4796-b69f-f798c3c56909 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:15.711 10:18:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:13:15.711 10:18:29 -- common/autotest_common.sh@1177 -- # local i=0 00:13:15.711 10:18:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:15.711 10:18:29 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:15.711 10:18:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:18.242 10:18:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:18.242 10:18:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:18.242 10:18:31 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:13:18.242 10:18:31 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:18.242 10:18:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:18.242 10:18:31 -- common/autotest_common.sh@1187 -- # return 0 00:13:18.242 10:18:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:18.242 10:18:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 --hostid=ff23582e-2e24-4796-b69f-f798c3c56909 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:13:18.242 10:18:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:13:18.242 10:18:31 -- common/autotest_common.sh@1177 -- # local i=0 00:13:18.242 10:18:31 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:18.242 10:18:31 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:18.242 10:18:31 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:20.201 10:18:33 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:20.201 10:18:33 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:13:20.201 10:18:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:20.201 10:18:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:20.201 10:18:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:20.201 10:18:33 -- common/autotest_common.sh@1187 -- # return 0 00:13:20.201 10:18:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:13:20.201 10:18:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 --hostid=ff23582e-2e24-4796-b69f-f798c3c56909 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:13:20.201 10:18:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:13:20.201 10:18:33 -- common/autotest_common.sh@1177 -- # local i=0 00:13:20.201 10:18:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:20.201 10:18:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:20.201 10:18:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:22.104 10:18:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:22.104 10:18:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:22.104 10:18:35 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:13:22.104 10:18:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:22.104 10:18:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:22.104 10:18:35 -- common/autotest_common.sh@1187 -- # return 0 00:13:22.104 10:18:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:22.104 10:18:35 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 --hostid=ff23582e-2e24-4796-b69f-f798c3c56909 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:13:22.362 10:18:35 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:13:22.362 10:18:35 -- common/autotest_common.sh@1177 -- # local i=0 00:13:22.362 10:18:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:22.362 10:18:35 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:22.362 10:18:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:24.263 10:18:37 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:24.264 10:18:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:24.264 10:18:37 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:13:24.264 10:18:37 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:24.264 10:18:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:24.264 10:18:37 -- common/autotest_common.sh@1187 -- # return 0 00:13:24.264 10:18:37 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:24.264 10:18:37 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 --hostid=ff23582e-2e24-4796-b69f-f798c3c56909 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:13:24.522 10:18:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:13:24.522 10:18:37 -- common/autotest_common.sh@1177 -- # local i=0 00:13:24.522 10:18:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:24.522 10:18:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:24.522 10:18:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:26.449 10:18:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:26.449 10:18:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:26.449 10:18:39 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:13:26.449 10:18:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:26.449 10:18:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:26.449 10:18:39 
-- common/autotest_common.sh@1187 -- # return 0 00:13:26.449 10:18:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:26.449 10:18:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 --hostid=ff23582e-2e24-4796-b69f-f798c3c56909 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:13:26.449 10:18:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:13:26.449 10:18:39 -- common/autotest_common.sh@1177 -- # local i=0 00:13:26.449 10:18:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:26.449 10:18:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:26.449 10:18:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:28.983 10:18:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:28.983 10:18:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:28.983 10:18:41 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:13:28.983 10:18:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:28.983 10:18:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:28.983 10:18:41 -- common/autotest_common.sh@1187 -- # return 0 00:13:28.983 10:18:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:28.983 10:18:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 --hostid=ff23582e-2e24-4796-b69f-f798c3c56909 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:13:28.983 10:18:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:13:28.983 10:18:42 -- common/autotest_common.sh@1177 -- # local i=0 00:13:28.983 10:18:42 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:28.983 10:18:42 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:28.983 10:18:42 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:30.893 10:18:44 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:30.893 10:18:44 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:30.893 10:18:44 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:13:30.893 10:18:44 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:30.893 10:18:44 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:30.893 10:18:44 -- common/autotest_common.sh@1187 -- # return 0 00:13:30.893 10:18:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:30.893 10:18:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 --hostid=ff23582e-2e24-4796-b69f-f798c3c56909 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:13:30.893 10:18:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:13:30.893 10:18:44 -- common/autotest_common.sh@1177 -- # local i=0 00:13:30.893 10:18:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:30.893 10:18:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:30.893 10:18:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:32.796 10:18:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:32.796 10:18:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:32.796 10:18:46 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:13:32.796 10:18:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 
00:13:32.796 10:18:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:32.796 10:18:46 -- common/autotest_common.sh@1187 -- # return 0 00:13:32.796 10:18:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:32.796 10:18:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 --hostid=ff23582e-2e24-4796-b69f-f798c3c56909 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:13:33.054 10:18:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:13:33.054 10:18:46 -- common/autotest_common.sh@1177 -- # local i=0 00:13:33.054 10:18:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:33.054 10:18:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:33.054 10:18:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:34.957 10:18:48 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:34.957 10:18:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:34.957 10:18:48 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:13:34.957 10:18:48 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:34.957 10:18:48 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:34.957 10:18:48 -- common/autotest_common.sh@1187 -- # return 0 00:13:34.957 10:18:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:34.957 10:18:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 --hostid=ff23582e-2e24-4796-b69f-f798c3c56909 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:13:35.215 10:18:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:13:35.215 10:18:48 -- common/autotest_common.sh@1177 -- # local i=0 00:13:35.215 10:18:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:35.215 10:18:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:35.215 10:18:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:37.114 10:18:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:37.114 10:18:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:37.114 10:18:50 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:13:37.372 10:18:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:37.372 10:18:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:37.372 10:18:50 -- common/autotest_common.sh@1187 -- # return 0 00:13:37.372 10:18:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:37.372 10:18:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 --hostid=ff23582e-2e24-4796-b69f-f798c3c56909 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:13:37.372 10:18:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:13:37.372 10:18:50 -- common/autotest_common.sh@1177 -- # local i=0 00:13:37.372 10:18:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:37.372 10:18:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:37.372 10:18:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:39.906 10:18:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:39.906 10:18:52 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:13:39.906 10:18:52 -- 
common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:39.906 10:18:52 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:39.906 10:18:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:39.906 10:18:52 -- common/autotest_common.sh@1187 -- # return 0 00:13:39.906 10:18:52 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:13:39.906 [global] 00:13:39.906 thread=1 00:13:39.906 invalidate=1 00:13:39.906 rw=read 00:13:39.906 time_based=1 00:13:39.906 runtime=10 00:13:39.906 ioengine=libaio 00:13:39.906 direct=1 00:13:39.906 bs=262144 00:13:39.906 iodepth=64 00:13:39.906 norandommap=1 00:13:39.906 numjobs=1 00:13:39.906 00:13:39.906 [job0] 00:13:39.906 filename=/dev/nvme0n1 00:13:39.906 [job1] 00:13:39.906 filename=/dev/nvme10n1 00:13:39.906 [job2] 00:13:39.906 filename=/dev/nvme1n1 00:13:39.906 [job3] 00:13:39.906 filename=/dev/nvme2n1 00:13:39.906 [job4] 00:13:39.906 filename=/dev/nvme3n1 00:13:39.906 [job5] 00:13:39.906 filename=/dev/nvme4n1 00:13:39.906 [job6] 00:13:39.906 filename=/dev/nvme5n1 00:13:39.906 [job7] 00:13:39.906 filename=/dev/nvme6n1 00:13:39.906 [job8] 00:13:39.906 filename=/dev/nvme7n1 00:13:39.906 [job9] 00:13:39.906 filename=/dev/nvme8n1 00:13:39.906 [job10] 00:13:39.906 filename=/dev/nvme9n1 00:13:39.906 Could not set queue depth (nvme0n1) 00:13:39.906 Could not set queue depth (nvme10n1) 00:13:39.906 Could not set queue depth (nvme1n1) 00:13:39.906 Could not set queue depth (nvme2n1) 00:13:39.906 Could not set queue depth (nvme3n1) 00:13:39.906 Could not set queue depth (nvme4n1) 00:13:39.906 Could not set queue depth (nvme5n1) 00:13:39.906 Could not set queue depth (nvme6n1) 00:13:39.906 Could not set queue depth (nvme7n1) 00:13:39.906 Could not set queue depth (nvme8n1) 00:13:39.906 Could not set queue depth (nvme9n1) 00:13:39.906 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:39.906 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:39.906 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:39.906 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:39.906 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:39.906 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:39.906 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:39.906 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:39.906 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:39.906 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:39.906 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:39.906 fio-3.35 00:13:39.906 Starting 11 threads 00:13:52.107 00:13:52.107 job0: (groupid=0, jobs=1): err= 0: pid=78472: Fri Jul 26 10:19:03 2024 00:13:52.107 read: IOPS=605, BW=151MiB/s (159MB/s)(1526MiB/10083msec) 00:13:52.107 slat (usec): min=19, max=41661, avg=1633.59, stdev=3687.28 
00:13:52.107 clat (msec): min=42, max=210, avg=103.97, stdev=14.38 00:13:52.107 lat (msec): min=42, max=210, avg=105.60, stdev=14.55 00:13:52.107 clat percentiles (msec): 00:13:52.107 | 1.00th=[ 62], 5.00th=[ 82], 10.00th=[ 88], 20.00th=[ 94], 00:13:52.108 | 30.00th=[ 97], 40.00th=[ 102], 50.00th=[ 105], 60.00th=[ 109], 00:13:52.108 | 70.00th=[ 112], 80.00th=[ 116], 90.00th=[ 120], 95.00th=[ 123], 00:13:52.108 | 99.00th=[ 133], 99.50th=[ 140], 99.90th=[ 190], 99.95th=[ 201], 00:13:52.108 | 99.99th=[ 211] 00:13:52.108 bw ( KiB/s): min=138240, max=178688, per=9.09%, avg=154594.60, stdev=14667.76, samples=20 00:13:52.108 iops : min= 540, max= 698, avg=603.85, stdev=57.32, samples=20 00:13:52.108 lat (msec) : 50=0.57%, 100=36.74%, 250=62.69% 00:13:52.108 cpu : usr=0.33%, sys=2.51%, ctx=1418, majf=0, minf=4097 00:13:52.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:13:52.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:52.108 issued rwts: total=6103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.108 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:52.108 job1: (groupid=0, jobs=1): err= 0: pid=78473: Fri Jul 26 10:19:03 2024 00:13:52.108 read: IOPS=434, BW=109MiB/s (114MB/s)(1100MiB/10113msec) 00:13:52.108 slat (usec): min=18, max=58516, avg=2269.76, stdev=5129.74 00:13:52.108 clat (msec): min=39, max=249, avg=144.67, stdev=17.85 00:13:52.108 lat (msec): min=39, max=249, avg=146.94, stdev=18.32 00:13:52.108 clat percentiles (msec): 00:13:52.108 | 1.00th=[ 81], 5.00th=[ 126], 10.00th=[ 130], 20.00th=[ 133], 00:13:52.108 | 30.00th=[ 136], 40.00th=[ 138], 50.00th=[ 142], 60.00th=[ 146], 00:13:52.108 | 70.00th=[ 153], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 174], 00:13:52.108 | 99.00th=[ 186], 99.50th=[ 194], 99.90th=[ 234], 99.95th=[ 234], 00:13:52.108 | 99.99th=[ 251] 00:13:52.108 bw ( KiB/s): min=94208, max=121344, per=6.53%, avg=111001.60, stdev=9279.56, samples=20 00:13:52.108 iops : min= 368, max= 474, avg=433.60, stdev=36.25, samples=20 00:13:52.108 lat (msec) : 50=0.18%, 100=1.25%, 250=98.57% 00:13:52.108 cpu : usr=0.25%, sys=1.64%, ctx=1038, majf=0, minf=4097 00:13:52.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:13:52.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:52.108 issued rwts: total=4399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.108 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:52.108 job2: (groupid=0, jobs=1): err= 0: pid=78474: Fri Jul 26 10:19:03 2024 00:13:52.108 read: IOPS=429, BW=107MiB/s (113MB/s)(1086MiB/10109msec) 00:13:52.108 slat (usec): min=22, max=56547, avg=2298.23, stdev=5424.92 00:13:52.108 clat (msec): min=73, max=241, avg=146.46, stdev=16.72 00:13:52.108 lat (msec): min=73, max=241, avg=148.76, stdev=17.09 00:13:52.108 clat percentiles (msec): 00:13:52.108 | 1.00th=[ 110], 5.00th=[ 126], 10.00th=[ 130], 20.00th=[ 134], 00:13:52.108 | 30.00th=[ 138], 40.00th=[ 140], 50.00th=[ 144], 60.00th=[ 150], 00:13:52.108 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 174], 00:13:52.108 | 99.00th=[ 190], 99.50th=[ 207], 99.90th=[ 234], 99.95th=[ 243], 00:13:52.108 | 99.99th=[ 243] 00:13:52.108 bw ( KiB/s): min=92672, max=119808, per=6.44%, avg=109553.05, stdev=8934.88, samples=20 00:13:52.108 iops : min= 362, max= 468, 
avg=427.90, stdev=34.92, samples=20 00:13:52.108 lat (msec) : 100=0.37%, 250=99.63% 00:13:52.108 cpu : usr=0.19%, sys=2.10%, ctx=1018, majf=0, minf=4097 00:13:52.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:13:52.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:52.108 issued rwts: total=4342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.108 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:52.108 job3: (groupid=0, jobs=1): err= 0: pid=78475: Fri Jul 26 10:19:03 2024 00:13:52.108 read: IOPS=451, BW=113MiB/s (118MB/s)(1142MiB/10104msec) 00:13:52.108 slat (usec): min=18, max=79490, avg=2171.56, stdev=5269.55 00:13:52.108 clat (msec): min=30, max=239, avg=139.22, stdev=26.54 00:13:52.108 lat (msec): min=30, max=240, avg=141.39, stdev=27.01 00:13:52.108 clat percentiles (msec): 00:13:52.108 | 1.00th=[ 56], 5.00th=[ 82], 10.00th=[ 105], 20.00th=[ 130], 00:13:52.108 | 30.00th=[ 134], 40.00th=[ 136], 50.00th=[ 140], 60.00th=[ 144], 00:13:52.108 | 70.00th=[ 153], 80.00th=[ 161], 90.00th=[ 169], 95.00th=[ 176], 00:13:52.108 | 99.00th=[ 190], 99.50th=[ 207], 99.90th=[ 230], 99.95th=[ 241], 00:13:52.108 | 99.99th=[ 241] 00:13:52.108 bw ( KiB/s): min=94208, max=177152, per=6.78%, avg=115302.40, stdev=19053.73, samples=20 00:13:52.108 iops : min= 368, max= 692, avg=450.40, stdev=74.43, samples=20 00:13:52.108 lat (msec) : 50=0.77%, 100=8.21%, 250=91.02% 00:13:52.108 cpu : usr=0.23%, sys=1.85%, ctx=1057, majf=0, minf=4097 00:13:52.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:13:52.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:52.108 issued rwts: total=4567,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.108 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:52.108 job4: (groupid=0, jobs=1): err= 0: pid=78476: Fri Jul 26 10:19:03 2024 00:13:52.108 read: IOPS=596, BW=149MiB/s (156MB/s)(1503MiB/10082msec) 00:13:52.108 slat (usec): min=17, max=34247, avg=1659.31, stdev=3772.67 00:13:52.108 clat (msec): min=36, max=188, avg=105.55, stdev=14.56 00:13:52.108 lat (msec): min=37, max=188, avg=107.21, stdev=14.60 00:13:52.108 clat percentiles (msec): 00:13:52.108 | 1.00th=[ 66], 5.00th=[ 83], 10.00th=[ 88], 20.00th=[ 95], 00:13:52.108 | 30.00th=[ 100], 40.00th=[ 103], 50.00th=[ 106], 60.00th=[ 110], 00:13:52.108 | 70.00th=[ 113], 80.00th=[ 118], 90.00th=[ 124], 95.00th=[ 128], 00:13:52.108 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 176], 99.95th=[ 180], 00:13:52.108 | 99.99th=[ 188] 00:13:52.108 bw ( KiB/s): min=136704, max=178688, per=8.96%, avg=152310.60, stdev=13883.66, samples=20 00:13:52.108 iops : min= 534, max= 698, avg=594.95, stdev=54.22, samples=20 00:13:52.108 lat (msec) : 50=0.07%, 100=33.67%, 250=66.27% 00:13:52.108 cpu : usr=0.30%, sys=2.29%, ctx=1322, majf=0, minf=4097 00:13:52.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:13:52.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:52.108 issued rwts: total=6012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.108 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:52.108 job5: (groupid=0, jobs=1): err= 0: pid=78477: Fri Jul 26 10:19:03 2024 00:13:52.108 read: 
IOPS=433, BW=108MiB/s (114MB/s)(1096MiB/10112msec) 00:13:52.108 slat (usec): min=18, max=70268, avg=2278.19, stdev=5399.27 00:13:52.108 clat (msec): min=50, max=240, avg=145.23, stdev=17.08 00:13:52.108 lat (msec): min=50, max=240, avg=147.51, stdev=17.52 00:13:52.108 clat percentiles (msec): 00:13:52.108 | 1.00th=[ 114], 5.00th=[ 127], 10.00th=[ 130], 20.00th=[ 133], 00:13:52.108 | 30.00th=[ 136], 40.00th=[ 138], 50.00th=[ 142], 60.00th=[ 146], 00:13:52.108 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 174], 00:13:52.108 | 99.00th=[ 190], 99.50th=[ 199], 99.90th=[ 228], 99.95th=[ 241], 00:13:52.108 | 99.99th=[ 241] 00:13:52.108 bw ( KiB/s): min=90112, max=121856, per=6.51%, avg=110577.10, stdev=9733.89, samples=20 00:13:52.108 iops : min= 352, max= 476, avg=431.90, stdev=38.04, samples=20 00:13:52.108 lat (msec) : 100=0.46%, 250=99.54% 00:13:52.108 cpu : usr=0.27%, sys=2.01%, ctx=1017, majf=0, minf=4097 00:13:52.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:13:52.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:52.108 issued rwts: total=4382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.108 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:52.108 job6: (groupid=0, jobs=1): err= 0: pid=78478: Fri Jul 26 10:19:03 2024 00:13:52.108 read: IOPS=983, BW=246MiB/s (258MB/s)(2466MiB/10023msec) 00:13:52.108 slat (usec): min=21, max=36271, avg=1009.92, stdev=2414.07 00:13:52.108 clat (msec): min=19, max=133, avg=63.95, stdev=10.10 00:13:52.108 lat (msec): min=19, max=133, avg=64.96, stdev=10.19 00:13:52.108 clat percentiles (msec): 00:13:52.108 | 1.00th=[ 51], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 59], 00:13:52.108 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 63], 00:13:52.108 | 70.00th=[ 65], 80.00th=[ 68], 90.00th=[ 74], 95.00th=[ 82], 00:13:52.108 | 99.00th=[ 109], 99.50th=[ 112], 99.90th=[ 130], 99.95th=[ 130], 00:13:52.108 | 99.99th=[ 134] 00:13:52.108 bw ( KiB/s): min=176128, max=278528, per=14.76%, avg=250854.40, stdev=30223.06, samples=20 00:13:52.108 iops : min= 688, max= 1088, avg=979.90, stdev=118.06, samples=20 00:13:52.108 lat (msec) : 20=0.03%, 50=0.93%, 100=96.94%, 250=2.10% 00:13:52.108 cpu : usr=0.49%, sys=3.60%, ctx=1931, majf=0, minf=4097 00:13:52.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:52.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:52.108 issued rwts: total=9862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.109 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:52.109 job7: (groupid=0, jobs=1): err= 0: pid=78479: Fri Jul 26 10:19:03 2024 00:13:52.109 read: IOPS=600, BW=150MiB/s (157MB/s)(1513MiB/10082msec) 00:13:52.109 slat (usec): min=22, max=38225, avg=1649.52, stdev=3724.99 00:13:52.109 clat (msec): min=24, max=188, avg=104.83, stdev=13.58 00:13:52.109 lat (msec): min=24, max=188, avg=106.48, stdev=13.71 00:13:52.109 clat percentiles (msec): 00:13:52.109 | 1.00th=[ 73], 5.00th=[ 85], 10.00th=[ 89], 20.00th=[ 94], 00:13:52.109 | 30.00th=[ 99], 40.00th=[ 103], 50.00th=[ 106], 60.00th=[ 110], 00:13:52.109 | 70.00th=[ 113], 80.00th=[ 116], 90.00th=[ 121], 95.00th=[ 125], 00:13:52.109 | 99.00th=[ 131], 99.50th=[ 136], 99.90th=[ 184], 99.95th=[ 184], 00:13:52.109 | 99.99th=[ 188] 00:13:52.109 bw ( KiB/s): 
min=137728, max=178176, per=9.01%, avg=153241.60, stdev=13922.06, samples=20 00:13:52.109 iops : min= 538, max= 696, avg=598.60, stdev=54.38, samples=20 00:13:52.109 lat (msec) : 50=0.23%, 100=35.14%, 250=64.63% 00:13:52.109 cpu : usr=0.27%, sys=2.64%, ctx=1348, majf=0, minf=4097 00:13:52.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:13:52.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:52.109 issued rwts: total=6050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.109 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:52.109 job8: (groupid=0, jobs=1): err= 0: pid=78480: Fri Jul 26 10:19:03 2024 00:13:52.109 read: IOPS=428, BW=107MiB/s (112MB/s)(1084MiB/10109msec) 00:13:52.109 slat (usec): min=19, max=101838, avg=2302.96, stdev=5687.48 00:13:52.109 clat (msec): min=35, max=244, avg=146.64, stdev=16.75 00:13:52.109 lat (msec): min=35, max=247, avg=148.95, stdev=17.19 00:13:52.109 clat percentiles (msec): 00:13:52.109 | 1.00th=[ 118], 5.00th=[ 128], 10.00th=[ 131], 20.00th=[ 134], 00:13:52.109 | 30.00th=[ 136], 40.00th=[ 140], 50.00th=[ 142], 60.00th=[ 148], 00:13:52.109 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 169], 95.00th=[ 176], 00:13:52.109 | 99.00th=[ 192], 99.50th=[ 201], 99.90th=[ 228], 99.95th=[ 228], 00:13:52.109 | 99.99th=[ 245] 00:13:52.109 bw ( KiB/s): min=86528, max=121856, per=6.43%, avg=109365.20, stdev=10389.39, samples=20 00:13:52.109 iops : min= 338, max= 476, avg=427.20, stdev=40.58, samples=20 00:13:52.109 lat (msec) : 50=0.12%, 250=99.88% 00:13:52.109 cpu : usr=0.21%, sys=1.77%, ctx=1028, majf=0, minf=4097 00:13:52.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:13:52.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:52.109 issued rwts: total=4336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.109 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:52.109 job9: (groupid=0, jobs=1): err= 0: pid=78481: Fri Jul 26 10:19:03 2024 00:13:52.109 read: IOPS=721, BW=180MiB/s (189MB/s)(1820MiB/10086msec) 00:13:52.109 slat (usec): min=18, max=84416, avg=1347.09, stdev=3791.59 00:13:52.109 clat (usec): min=1246, max=201562, avg=87205.52, stdev=43891.02 00:13:52.109 lat (usec): min=1298, max=204968, avg=88552.61, stdev=44545.00 00:13:52.109 clat percentiles (msec): 00:13:52.109 | 1.00th=[ 17], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 43], 00:13:52.109 | 30.00th=[ 44], 40.00th=[ 47], 50.00th=[ 104], 60.00th=[ 111], 00:13:52.109 | 70.00th=[ 115], 80.00th=[ 122], 90.00th=[ 142], 95.00th=[ 165], 00:13:52.109 | 99.00th=[ 178], 99.50th=[ 182], 99.90th=[ 190], 99.95th=[ 199], 00:13:52.109 | 99.99th=[ 203] 00:13:52.109 bw ( KiB/s): min=96256, max=382464, per=10.87%, avg=184791.25, stdev=100505.72, samples=20 00:13:52.109 iops : min= 376, max= 1494, avg=721.80, stdev=392.63, samples=20 00:13:52.109 lat (msec) : 2=0.01%, 4=0.03%, 10=0.47%, 20=1.17%, 50=41.48% 00:13:52.109 lat (msec) : 100=4.02%, 250=52.82% 00:13:52.109 cpu : usr=0.28%, sys=2.54%, ctx=1590, majf=0, minf=4097 00:13:52.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:13:52.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:52.109 issued rwts: total=7281,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:13:52.109 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:52.109 job10: (groupid=0, jobs=1): err= 0: pid=78482: Fri Jul 26 10:19:03 2024 00:13:52.109 read: IOPS=980, BW=245MiB/s (257MB/s)(2454MiB/10016msec) 00:13:52.109 slat (usec): min=21, max=45595, avg=1014.75, stdev=2515.20 00:13:52.109 clat (msec): min=13, max=131, avg=64.20, stdev=11.13 00:13:52.109 lat (msec): min=16, max=131, avg=65.22, stdev=11.18 00:13:52.109 clat percentiles (msec): 00:13:52.109 | 1.00th=[ 51], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 59], 00:13:52.109 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 63], 00:13:52.109 | 70.00th=[ 65], 80.00th=[ 68], 90.00th=[ 74], 95.00th=[ 84], 00:13:52.109 | 99.00th=[ 115], 99.50th=[ 122], 99.90th=[ 130], 99.95th=[ 131], 00:13:52.109 | 99.99th=[ 132] 00:13:52.109 bw ( KiB/s): min=145699, max=273408, per=14.69%, avg=249716.95, stdev=33371.07, samples=20 00:13:52.109 iops : min= 569, max= 1068, avg=975.45, stdev=130.38, samples=20 00:13:52.109 lat (msec) : 20=0.18%, 50=0.84%, 100=96.08%, 250=2.90% 00:13:52.109 cpu : usr=0.45%, sys=3.31%, ctx=1877, majf=0, minf=4097 00:13:52.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:52.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:52.109 issued rwts: total=9817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.109 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:52.109 00:13:52.109 Run status group 0 (all jobs): 00:13:52.109 READ: bw=1660MiB/s (1741MB/s), 107MiB/s-246MiB/s (112MB/s-258MB/s), io=16.4GiB (17.6GB), run=10016-10113msec 00:13:52.109 00:13:52.109 Disk stats (read/write): 00:13:52.109 nvme0n1: ios=12115/0, merge=0/0, ticks=1237501/0, in_queue=1237501, util=98.06% 00:13:52.109 nvme10n1: ios=8687/0, merge=0/0, ticks=1232323/0, in_queue=1232323, util=98.23% 00:13:52.109 nvme1n1: ios=8587/0, merge=0/0, ticks=1231254/0, in_queue=1231254, util=98.30% 00:13:52.109 nvme2n1: ios=9030/0, merge=0/0, ticks=1231579/0, in_queue=1231579, util=98.35% 00:13:52.109 nvme3n1: ios=11917/0, merge=0/0, ticks=1235991/0, in_queue=1235991, util=98.41% 00:13:52.109 nvme4n1: ios=8655/0, merge=0/0, ticks=1232642/0, in_queue=1232642, util=98.69% 00:13:52.109 nvme5n1: ios=19139/0, merge=0/0, ticks=1210499/0, in_queue=1210499, util=98.72% 00:13:52.109 nvme6n1: ios=11999/0, merge=0/0, ticks=1236418/0, in_queue=1236418, util=98.69% 00:13:52.109 nvme7n1: ios=8567/0, merge=0/0, ticks=1231426/0, in_queue=1231426, util=98.94% 00:13:52.109 nvme8n1: ios=14463/0, merge=0/0, ticks=1238133/0, in_queue=1238133, util=99.05% 00:13:52.109 nvme9n1: ios=19038/0, merge=0/0, ticks=1211732/0, in_queue=1211732, util=99.04% 00:13:52.109 10:19:03 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:13:52.109 [global] 00:13:52.109 thread=1 00:13:52.109 invalidate=1 00:13:52.109 rw=randwrite 00:13:52.109 time_based=1 00:13:52.109 runtime=10 00:13:52.109 ioengine=libaio 00:13:52.109 direct=1 00:13:52.109 bs=262144 00:13:52.109 iodepth=64 00:13:52.109 norandommap=1 00:13:52.109 numjobs=1 00:13:52.109 00:13:52.109 [job0] 00:13:52.109 filename=/dev/nvme0n1 00:13:52.109 [job1] 00:13:52.109 filename=/dev/nvme10n1 00:13:52.109 [job2] 00:13:52.109 filename=/dev/nvme1n1 00:13:52.109 [job3] 00:13:52.109 filename=/dev/nvme2n1 00:13:52.109 [job4] 00:13:52.109 filename=/dev/nvme3n1 00:13:52.109 [job5] 
00:13:52.109 filename=/dev/nvme4n1 00:13:52.109 [job6] 00:13:52.109 filename=/dev/nvme5n1 00:13:52.109 [job7] 00:13:52.109 filename=/dev/nvme6n1 00:13:52.109 [job8] 00:13:52.109 filename=/dev/nvme7n1 00:13:52.109 [job9] 00:13:52.109 filename=/dev/nvme8n1 00:13:52.109 [job10] 00:13:52.109 filename=/dev/nvme9n1 00:13:52.109 Could not set queue depth (nvme0n1) 00:13:52.109 Could not set queue depth (nvme10n1) 00:13:52.109 Could not set queue depth (nvme1n1) 00:13:52.109 Could not set queue depth (nvme2n1) 00:13:52.109 Could not set queue depth (nvme3n1) 00:13:52.109 Could not set queue depth (nvme4n1) 00:13:52.109 Could not set queue depth (nvme5n1) 00:13:52.109 Could not set queue depth (nvme6n1) 00:13:52.109 Could not set queue depth (nvme7n1) 00:13:52.109 Could not set queue depth (nvme8n1) 00:13:52.109 Could not set queue depth (nvme9n1) 00:13:52.109 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:52.109 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:52.109 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:52.109 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:52.109 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:52.109 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:52.109 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:52.109 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:52.109 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:52.109 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:52.109 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:52.109 fio-3.35 00:13:52.109 Starting 11 threads 00:14:02.082 00:14:02.082 job0: (groupid=0, jobs=1): err= 0: pid=78677: Fri Jul 26 10:19:14 2024 00:14:02.082 write: IOPS=379, BW=95.0MiB/s (99.6MB/s)(964MiB/10149msec); 0 zone resets 00:14:02.082 slat (usec): min=20, max=20642, avg=2589.10, stdev=4450.13 00:14:02.082 clat (msec): min=8, max=290, avg=165.78, stdev=17.22 00:14:02.082 lat (msec): min=8, max=290, avg=168.37, stdev=16.91 00:14:02.082 clat percentiles (msec): 00:14:02.082 | 1.00th=[ 95], 5.00th=[ 144], 10.00th=[ 153], 20.00th=[ 159], 00:14:02.082 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 171], 60.00th=[ 171], 00:14:02.082 | 70.00th=[ 174], 80.00th=[ 174], 90.00th=[ 176], 95.00th=[ 176], 00:14:02.082 | 99.00th=[ 197], 99.50th=[ 245], 99.90th=[ 284], 99.95th=[ 292], 00:14:02.082 | 99.99th=[ 292] 00:14:02.082 bw ( KiB/s): min=92160, max=108544, per=7.09%, avg=97075.20, stdev=4526.13, samples=20 00:14:02.082 iops : min= 360, max= 424, avg=379.20, stdev=17.68, samples=20 00:14:02.082 lat (msec) : 10=0.10%, 20=0.13%, 50=0.18%, 100=0.62%, 250=98.50% 00:14:02.082 lat (msec) : 500=0.47% 00:14:02.082 cpu : usr=0.90%, sys=1.19%, ctx=4804, majf=0, minf=1 00:14:02.082 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:14:02.082 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:02.082 issued rwts: total=0,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.083 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:02.083 job1: (groupid=0, jobs=1): err= 0: pid=78683: Fri Jul 26 10:19:14 2024 00:14:02.083 write: IOPS=546, BW=137MiB/s (143MB/s)(1378MiB/10090msec); 0 zone resets 00:14:02.083 slat (usec): min=18, max=102956, avg=1809.49, stdev=3339.91 00:14:02.083 clat (msec): min=88, max=234, avg=115.29, stdev=11.43 00:14:02.083 lat (msec): min=94, max=234, avg=117.10, stdev=11.12 00:14:02.083 clat percentiles (msec): 00:14:02.083 | 1.00th=[ 96], 5.00th=[ 102], 10.00th=[ 103], 20.00th=[ 110], 00:14:02.083 | 30.00th=[ 112], 40.00th=[ 116], 50.00th=[ 117], 60.00th=[ 118], 00:14:02.083 | 70.00th=[ 118], 80.00th=[ 120], 90.00th=[ 121], 95.00th=[ 122], 00:14:02.083 | 99.00th=[ 167], 99.50th=[ 184], 99.90th=[ 224], 99.95th=[ 234], 00:14:02.083 | 99.99th=[ 234] 00:14:02.083 bw ( KiB/s): min=96256, max=158720, per=10.18%, avg=139505.85, stdev=12426.13, samples=20 00:14:02.083 iops : min= 376, max= 620, avg=544.90, stdev=48.53, samples=20 00:14:02.083 lat (msec) : 100=4.15%, 250=95.85% 00:14:02.083 cpu : usr=1.42%, sys=1.30%, ctx=7316, majf=0, minf=1 00:14:02.083 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:14:02.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:02.083 issued rwts: total=0,5513,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.083 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:02.083 job2: (groupid=0, jobs=1): err= 0: pid=78689: Fri Jul 26 10:19:14 2024 00:14:02.083 write: IOPS=548, BW=137MiB/s (144MB/s)(1385MiB/10094msec); 0 zone resets 00:14:02.083 slat (usec): min=20, max=52603, avg=1800.44, stdev=3118.74 00:14:02.083 clat (msec): min=56, max=190, avg=114.80, stdev=10.17 00:14:02.083 lat (msec): min=56, max=190, avg=116.61, stdev= 9.85 00:14:02.083 clat percentiles (msec): 00:14:02.083 | 1.00th=[ 95], 5.00th=[ 102], 10.00th=[ 103], 20.00th=[ 110], 00:14:02.083 | 30.00th=[ 112], 40.00th=[ 116], 50.00th=[ 117], 60.00th=[ 118], 00:14:02.083 | 70.00th=[ 118], 80.00th=[ 120], 90.00th=[ 121], 95.00th=[ 122], 00:14:02.083 | 99.00th=[ 163], 99.50th=[ 171], 99.90th=[ 184], 99.95th=[ 184], 00:14:02.083 | 99.99th=[ 190] 00:14:02.083 bw ( KiB/s): min=108544, max=160256, per=10.23%, avg=140146.05, stdev=10290.30, samples=20 00:14:02.083 iops : min= 424, max= 626, avg=547.40, stdev=40.20, samples=20 00:14:02.083 lat (msec) : 100=4.57%, 250=95.43% 00:14:02.083 cpu : usr=1.49%, sys=1.69%, ctx=6535, majf=0, minf=1 00:14:02.083 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:14:02.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:02.083 issued rwts: total=0,5538,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.083 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:02.083 job3: (groupid=0, jobs=1): err= 0: pid=78690: Fri Jul 26 10:19:14 2024 00:14:02.083 write: IOPS=638, BW=160MiB/s (167MB/s)(1610MiB/10082msec); 0 zone resets 00:14:02.083 slat (usec): min=18, max=17840, avg=1548.06, stdev=2625.79 00:14:02.083 clat (msec): min=20, max=166, avg=98.61, stdev= 7.94 00:14:02.083 lat (msec): min=20, max=166, avg=100.16, 
stdev= 7.64 00:14:02.083 clat percentiles (msec): 00:14:02.083 | 1.00th=[ 83], 5.00th=[ 87], 10.00th=[ 89], 20.00th=[ 95], 00:14:02.083 | 30.00th=[ 97], 40.00th=[ 101], 50.00th=[ 102], 60.00th=[ 103], 00:14:02.083 | 70.00th=[ 103], 80.00th=[ 104], 90.00th=[ 105], 95.00th=[ 105], 00:14:02.083 | 99.00th=[ 107], 99.50th=[ 123], 99.90th=[ 157], 99.95th=[ 161], 00:14:02.083 | 99.99th=[ 167] 00:14:02.083 bw ( KiB/s): min=155648, max=184320, per=11.92%, avg=163251.20, stdev=8641.60, samples=20 00:14:02.083 iops : min= 608, max= 720, avg=637.70, stdev=33.76, samples=20 00:14:02.083 lat (msec) : 50=0.37%, 100=41.23%, 250=58.40% 00:14:02.083 cpu : usr=1.27%, sys=1.78%, ctx=7048, majf=0, minf=1 00:14:02.083 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:14:02.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:02.083 issued rwts: total=0,6440,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.083 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:02.083 job4: (groupid=0, jobs=1): err= 0: pid=78691: Fri Jul 26 10:19:14 2024 00:14:02.083 write: IOPS=379, BW=94.9MiB/s (99.5MB/s)(964MiB/10150msec); 0 zone resets 00:14:02.083 slat (usec): min=23, max=12185, avg=2590.35, stdev=4430.80 00:14:02.083 clat (msec): min=22, max=291, avg=165.85, stdev=16.42 00:14:02.083 lat (msec): min=22, max=291, avg=168.44, stdev=16.06 00:14:02.083 clat percentiles (msec): 00:14:02.083 | 1.00th=[ 110], 5.00th=[ 144], 10.00th=[ 153], 20.00th=[ 161], 00:14:02.083 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 171], 60.00th=[ 171], 00:14:02.083 | 70.00th=[ 174], 80.00th=[ 174], 90.00th=[ 176], 95.00th=[ 176], 00:14:02.083 | 99.00th=[ 199], 99.50th=[ 245], 99.90th=[ 284], 99.95th=[ 292], 00:14:02.083 | 99.99th=[ 292] 00:14:02.083 bw ( KiB/s): min=92160, max=108544, per=7.08%, avg=97040.15, stdev=4629.18, samples=20 00:14:02.083 iops : min= 360, max= 424, avg=379.05, stdev=18.09, samples=20 00:14:02.083 lat (msec) : 50=0.36%, 100=0.52%, 250=98.65%, 500=0.47% 00:14:02.083 cpu : usr=0.83%, sys=1.32%, ctx=5627, majf=0, minf=1 00:14:02.083 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:14:02.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:02.083 issued rwts: total=0,3854,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.083 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:02.083 job5: (groupid=0, jobs=1): err= 0: pid=78692: Fri Jul 26 10:19:14 2024 00:14:02.083 write: IOPS=377, BW=94.4MiB/s (99.0MB/s)(958MiB/10146msec); 0 zone resets 00:14:02.083 slat (usec): min=19, max=61319, avg=2603.30, stdev=4554.95 00:14:02.083 clat (msec): min=63, max=293, avg=166.73, stdev=14.14 00:14:02.083 lat (msec): min=63, max=293, avg=169.33, stdev=13.60 00:14:02.083 clat percentiles (msec): 00:14:02.083 | 1.00th=[ 142], 5.00th=[ 150], 10.00th=[ 153], 20.00th=[ 161], 00:14:02.083 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 171], 60.00th=[ 171], 00:14:02.083 | 70.00th=[ 174], 80.00th=[ 174], 90.00th=[ 176], 95.00th=[ 176], 00:14:02.083 | 99.00th=[ 211], 99.50th=[ 247], 99.90th=[ 284], 99.95th=[ 292], 00:14:02.083 | 99.99th=[ 292] 00:14:02.083 bw ( KiB/s): min=83968, max=108544, per=7.04%, avg=96502.50, stdev=5397.74, samples=20 00:14:02.083 iops : min= 328, max= 424, avg=376.95, stdev=21.09, samples=20 00:14:02.083 lat (msec) : 100=0.42%, 250=99.11%, 
500=0.47% 00:14:02.083 cpu : usr=0.98%, sys=1.05%, ctx=4271, majf=0, minf=1 00:14:02.083 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:14:02.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:02.083 issued rwts: total=0,3833,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.083 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:02.083 job6: (groupid=0, jobs=1): err= 0: pid=78693: Fri Jul 26 10:19:14 2024 00:14:02.083 write: IOPS=638, BW=160MiB/s (167MB/s)(1609MiB/10082msec); 0 zone resets 00:14:02.083 slat (usec): min=19, max=12887, avg=1549.50, stdev=2627.83 00:14:02.083 clat (msec): min=17, max=165, avg=98.62, stdev= 8.16 00:14:02.083 lat (msec): min=17, max=165, avg=100.17, stdev= 7.87 00:14:02.083 clat percentiles (msec): 00:14:02.083 | 1.00th=[ 83], 5.00th=[ 85], 10.00th=[ 89], 20.00th=[ 95], 00:14:02.083 | 30.00th=[ 97], 40.00th=[ 100], 50.00th=[ 102], 60.00th=[ 103], 00:14:02.083 | 70.00th=[ 103], 80.00th=[ 104], 90.00th=[ 105], 95.00th=[ 105], 00:14:02.083 | 99.00th=[ 109], 99.50th=[ 123], 99.90th=[ 155], 99.95th=[ 161], 00:14:02.083 | 99.99th=[ 167] 00:14:02.083 bw ( KiB/s): min=152064, max=184832, per=11.91%, avg=163174.40, stdev=8849.57, samples=20 00:14:02.083 iops : min= 594, max= 722, avg=637.40, stdev=34.57, samples=20 00:14:02.083 lat (msec) : 20=0.06%, 50=0.31%, 100=41.82%, 250=57.81% 00:14:02.083 cpu : usr=1.20%, sys=1.57%, ctx=8859, majf=0, minf=1 00:14:02.083 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:14:02.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:02.083 issued rwts: total=0,6437,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.083 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:02.083 job7: (groupid=0, jobs=1): err= 0: pid=78694: Fri Jul 26 10:19:14 2024 00:14:02.083 write: IOPS=379, BW=95.0MiB/s (99.6MB/s)(964MiB/10147msec); 0 zone resets 00:14:02.083 slat (usec): min=22, max=14862, avg=2589.71, stdev=4433.70 00:14:02.083 clat (msec): min=20, max=290, avg=165.80, stdev=16.48 00:14:02.083 lat (msec): min=20, max=290, avg=168.39, stdev=16.12 00:14:02.083 clat percentiles (msec): 00:14:02.083 | 1.00th=[ 108], 5.00th=[ 144], 10.00th=[ 153], 20.00th=[ 159], 00:14:02.083 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 171], 60.00th=[ 171], 00:14:02.083 | 70.00th=[ 174], 80.00th=[ 174], 90.00th=[ 176], 95.00th=[ 176], 00:14:02.083 | 99.00th=[ 197], 99.50th=[ 245], 99.90th=[ 284], 99.95th=[ 292], 00:14:02.083 | 99.99th=[ 292] 00:14:02.083 bw ( KiB/s): min=92672, max=109056, per=7.08%, avg=97040.15, stdev=4673.67, samples=20 00:14:02.083 iops : min= 362, max= 426, avg=379.05, stdev=18.27, samples=20 00:14:02.083 lat (msec) : 50=0.34%, 100=0.62%, 250=98.57%, 500=0.47% 00:14:02.083 cpu : usr=0.79%, sys=1.39%, ctx=5533, majf=0, minf=1 00:14:02.083 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:14:02.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:02.083 issued rwts: total=0,3854,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.083 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:02.083 job8: (groupid=0, jobs=1): err= 0: pid=78695: Fri Jul 26 10:19:14 2024 00:14:02.083 write: IOPS=494, BW=124MiB/s 
(130MB/s)(1249MiB/10108msec); 0 zone resets 00:14:02.083 slat (usec): min=22, max=65074, avg=1971.97, stdev=3513.76 00:14:02.083 clat (msec): min=23, max=218, avg=127.42, stdev=15.55 00:14:02.083 lat (msec): min=23, max=218, avg=129.40, stdev=15.45 00:14:02.083 clat percentiles (msec): 00:14:02.084 | 1.00th=[ 52], 5.00th=[ 110], 10.00th=[ 117], 20.00th=[ 123], 00:14:02.084 | 30.00th=[ 125], 40.00th=[ 130], 50.00th=[ 132], 60.00th=[ 132], 00:14:02.084 | 70.00th=[ 133], 80.00th=[ 134], 90.00th=[ 136], 95.00th=[ 136], 00:14:02.084 | 99.00th=[ 184], 99.50th=[ 197], 99.90th=[ 211], 99.95th=[ 211], 00:14:02.084 | 99.99th=[ 220] 00:14:02.084 bw ( KiB/s): min=122880, max=139776, per=9.22%, avg=126311.00, stdev=5976.50, samples=20 00:14:02.084 iops : min= 480, max= 546, avg=493.40, stdev=23.35, samples=20 00:14:02.084 lat (msec) : 50=0.98%, 100=1.62%, 250=97.40% 00:14:02.084 cpu : usr=0.94%, sys=1.28%, ctx=6246, majf=0, minf=1 00:14:02.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:14:02.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:02.084 issued rwts: total=0,4997,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.084 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:02.084 job9: (groupid=0, jobs=1): err= 0: pid=78696: Fri Jul 26 10:19:14 2024 00:14:02.084 write: IOPS=496, BW=124MiB/s (130MB/s)(1254MiB/10104msec); 0 zone resets 00:14:02.084 slat (usec): min=19, max=15649, avg=1988.74, stdev=3397.84 00:14:02.084 clat (msec): min=18, max=220, avg=126.91, stdev=13.38 00:14:02.084 lat (msec): min=18, max=220, avg=128.90, stdev=13.17 00:14:02.084 clat percentiles (msec): 00:14:02.084 | 1.00th=[ 71], 5.00th=[ 110], 10.00th=[ 116], 20.00th=[ 123], 00:14:02.084 | 30.00th=[ 125], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 132], 00:14:02.084 | 70.00th=[ 133], 80.00th=[ 134], 90.00th=[ 136], 95.00th=[ 136], 00:14:02.084 | 99.00th=[ 138], 99.50th=[ 176], 99.90th=[ 213], 99.95th=[ 213], 00:14:02.084 | 99.99th=[ 220] 00:14:02.084 bw ( KiB/s): min=122368, max=141595, per=9.25%, avg=126773.05, stdev=6605.06, samples=20 00:14:02.084 iops : min= 478, max= 553, avg=495.20, stdev=25.79, samples=20 00:14:02.084 lat (msec) : 20=0.08%, 50=0.64%, 100=1.79%, 250=97.49% 00:14:02.084 cpu : usr=1.11%, sys=1.34%, ctx=7590, majf=0, minf=1 00:14:02.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:14:02.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:02.084 issued rwts: total=0,5015,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.084 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:02.084 job10: (groupid=0, jobs=1): err= 0: pid=78697: Fri Jul 26 10:19:14 2024 00:14:02.084 write: IOPS=493, BW=123MiB/s (129MB/s)(1247MiB/10106msec); 0 zone resets 00:14:02.084 slat (usec): min=21, max=46463, avg=2000.57, stdev=3453.69 00:14:02.084 clat (msec): min=49, max=216, avg=127.67, stdev=10.35 00:14:02.084 lat (msec): min=49, max=216, avg=129.68, stdev= 9.93 00:14:02.084 clat percentiles (msec): 00:14:02.084 | 1.00th=[ 100], 5.00th=[ 110], 10.00th=[ 117], 20.00th=[ 123], 00:14:02.084 | 30.00th=[ 125], 40.00th=[ 130], 50.00th=[ 132], 60.00th=[ 132], 00:14:02.084 | 70.00th=[ 133], 80.00th=[ 134], 90.00th=[ 136], 95.00th=[ 136], 00:14:02.084 | 99.00th=[ 138], 99.50th=[ 174], 99.90th=[ 209], 99.95th=[ 209], 00:14:02.084 | 
99.99th=[ 218] 00:14:02.084 bw ( KiB/s): min=121856, max=139776, per=9.20%, avg=126028.80, stdev=5862.49, samples=20 00:14:02.084 iops : min= 476, max= 546, avg=492.25, stdev=22.91, samples=20 00:14:02.084 lat (msec) : 50=0.08%, 100=1.16%, 250=98.76% 00:14:02.084 cpu : usr=1.12%, sys=1.54%, ctx=5797, majf=0, minf=1 00:14:02.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:14:02.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:02.084 issued rwts: total=0,4986,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.084 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:02.084 00:14:02.084 Run status group 0 (all jobs): 00:14:02.084 WRITE: bw=1338MiB/s (1403MB/s), 94.4MiB/s-160MiB/s (99.0MB/s-167MB/s), io=13.3GiB (14.2GB), run=10082-10150msec 00:14:02.084 00:14:02.084 Disk stats (read/write): 00:14:02.084 nvme0n1: ios=49/7571, merge=0/0, ticks=44/1213155, in_queue=1213199, util=97.95% 00:14:02.084 nvme10n1: ios=47/10848, merge=0/0, ticks=56/1212170, in_queue=1212226, util=97.91% 00:14:02.084 nvme1n1: ios=29/10898, merge=0/0, ticks=41/1212485, in_queue=1212526, util=97.94% 00:14:02.084 nvme2n1: ios=20/12711, merge=0/0, ticks=37/1213375, in_queue=1213412, util=98.09% 00:14:02.084 nvme3n1: ios=0/7555, merge=0/0, ticks=0/1210575, in_queue=1210575, util=97.97% 00:14:02.084 nvme4n1: ios=0/7512, merge=0/0, ticks=0/1210251, in_queue=1210251, util=98.18% 00:14:02.084 nvme5n1: ios=0/12705, merge=0/0, ticks=0/1212886, in_queue=1212886, util=98.36% 00:14:02.084 nvme6n1: ios=0/7554, merge=0/0, ticks=0/1209757, in_queue=1209757, util=98.38% 00:14:02.084 nvme7n1: ios=0/9832, merge=0/0, ticks=0/1212896, in_queue=1212896, util=98.68% 00:14:02.084 nvme8n1: ios=0/9875, merge=0/0, ticks=0/1212145, in_queue=1212145, util=98.81% 00:14:02.084 nvme9n1: ios=0/9807, merge=0/0, ticks=0/1212476, in_queue=1212476, util=98.88% 00:14:02.084 10:19:14 -- target/multiconnection.sh@36 -- # sync 00:14:02.084 10:19:14 -- target/multiconnection.sh@37 -- # seq 1 11 00:14:02.084 10:19:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:02.084 10:19:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:02.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.084 10:19:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:14:02.084 10:19:14 -- common/autotest_common.sh@1198 -- # local i=0 00:14:02.084 10:19:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:02.084 10:19:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:14:02.084 10:19:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:02.084 10:19:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:14:02.084 10:19:14 -- common/autotest_common.sh@1210 -- # return 0 00:14:02.084 10:19:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:02.084 10:19:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.084 10:19:14 -- common/autotest_common.sh@10 -- # set +x 00:14:02.084 10:19:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.084 10:19:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:02.084 10:19:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:14:02.084 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:14:02.084 10:19:14 -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:14:02.084 10:19:14 -- common/autotest_common.sh@1198 -- # local i=0 00:14:02.084 10:19:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:02.084 10:19:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:14:02.084 10:19:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:02.084 10:19:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:14:02.084 10:19:14 -- common/autotest_common.sh@1210 -- # return 0 00:14:02.084 10:19:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:02.084 10:19:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.084 10:19:14 -- common/autotest_common.sh@10 -- # set +x 00:14:02.084 10:19:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.084 10:19:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:02.084 10:19:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:14:02.084 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:14:02.084 10:19:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:14:02.084 10:19:14 -- common/autotest_common.sh@1198 -- # local i=0 00:14:02.084 10:19:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:02.084 10:19:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:14:02.084 10:19:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:02.084 10:19:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:14:02.084 10:19:14 -- common/autotest_common.sh@1210 -- # return 0 00:14:02.084 10:19:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:02.084 10:19:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.084 10:19:14 -- common/autotest_common.sh@10 -- # set +x 00:14:02.084 10:19:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.084 10:19:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:02.084 10:19:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:14:02.084 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:14:02.084 10:19:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:14:02.084 10:19:14 -- common/autotest_common.sh@1198 -- # local i=0 00:14:02.084 10:19:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:02.084 10:19:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:14:02.084 10:19:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:02.084 10:19:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:14:02.084 10:19:14 -- common/autotest_common.sh@1210 -- # return 0 00:14:02.084 10:19:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:02.084 10:19:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.084 10:19:14 -- common/autotest_common.sh@10 -- # set +x 00:14:02.084 10:19:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.084 10:19:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:02.084 10:19:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:14:02.084 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:14:02.084 10:19:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:14:02.084 10:19:14 -- 
common/autotest_common.sh@1198 -- # local i=0 00:14:02.084 10:19:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:02.084 10:19:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:14:02.084 10:19:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:14:02.084 10:19:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:02.084 10:19:14 -- common/autotest_common.sh@1210 -- # return 0 00:14:02.084 10:19:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:14:02.084 10:19:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.084 10:19:14 -- common/autotest_common.sh@10 -- # set +x 00:14:02.084 10:19:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.084 10:19:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:02.084 10:19:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:14:02.084 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:14:02.084 10:19:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:14:02.085 10:19:14 -- common/autotest_common.sh@1198 -- # local i=0 00:14:02.085 10:19:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:02.085 10:19:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:14:02.085 10:19:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:02.085 10:19:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:14:02.085 10:19:14 -- common/autotest_common.sh@1210 -- # return 0 00:14:02.085 10:19:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:14:02.085 10:19:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.085 10:19:14 -- common/autotest_common.sh@10 -- # set +x 00:14:02.085 10:19:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.085 10:19:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:02.085 10:19:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:14:02.085 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:14:02.085 10:19:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:14:02.085 10:19:14 -- common/autotest_common.sh@1198 -- # local i=0 00:14:02.085 10:19:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:02.085 10:19:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:14:02.085 10:19:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:02.085 10:19:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:14:02.085 10:19:14 -- common/autotest_common.sh@1210 -- # return 0 00:14:02.085 10:19:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:14:02.085 10:19:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.085 10:19:14 -- common/autotest_common.sh@10 -- # set +x 00:14:02.085 10:19:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.085 10:19:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:02.085 10:19:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:14:02.085 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:14:02.085 10:19:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:14:02.085 10:19:14 -- common/autotest_common.sh@1198 -- # local i=0 00:14:02.085 10:19:14 -- 
common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:02.085 10:19:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:14:02.085 10:19:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:02.085 10:19:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:14:02.085 10:19:14 -- common/autotest_common.sh@1210 -- # return 0 00:14:02.085 10:19:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:14:02.085 10:19:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.085 10:19:14 -- common/autotest_common.sh@10 -- # set +x 00:14:02.085 10:19:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.085 10:19:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:02.085 10:19:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:14:02.085 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:14:02.085 10:19:15 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:14:02.085 10:19:15 -- common/autotest_common.sh@1198 -- # local i=0 00:14:02.085 10:19:15 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:02.085 10:19:15 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:14:02.085 10:19:15 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:02.085 10:19:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:14:02.085 10:19:15 -- common/autotest_common.sh@1210 -- # return 0 00:14:02.085 10:19:15 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:14:02.085 10:19:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.085 10:19:15 -- common/autotest_common.sh@10 -- # set +x 00:14:02.085 10:19:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.085 10:19:15 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:02.085 10:19:15 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:14:02.085 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:14:02.085 10:19:15 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:14:02.085 10:19:15 -- common/autotest_common.sh@1198 -- # local i=0 00:14:02.085 10:19:15 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:02.085 10:19:15 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:14:02.085 10:19:15 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:02.085 10:19:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:14:02.085 10:19:15 -- common/autotest_common.sh@1210 -- # return 0 00:14:02.085 10:19:15 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:14:02.085 10:19:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.085 10:19:15 -- common/autotest_common.sh@10 -- # set +x 00:14:02.085 10:19:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.085 10:19:15 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:02.085 10:19:15 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:14:02.085 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:14:02.085 10:19:15 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:14:02.085 10:19:15 -- common/autotest_common.sh@1198 -- # local i=0 00:14:02.085 10:19:15 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:02.085 10:19:15 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:14:02.085 10:19:15 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:02.085 10:19:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:14:02.085 10:19:15 -- common/autotest_common.sh@1210 -- # return 0 00:14:02.085 10:19:15 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:14:02.085 10:19:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.085 10:19:15 -- common/autotest_common.sh@10 -- # set +x 00:14:02.085 10:19:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.085 10:19:15 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:14:02.085 10:19:15 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:14:02.085 10:19:15 -- target/multiconnection.sh@47 -- # nvmftestfini 00:14:02.085 10:19:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:02.085 10:19:15 -- nvmf/common.sh@116 -- # sync 00:14:02.085 10:19:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:02.085 10:19:15 -- nvmf/common.sh@119 -- # set +e 00:14:02.085 10:19:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:02.085 10:19:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:02.085 rmmod nvme_tcp 00:14:02.085 rmmod nvme_fabrics 00:14:02.085 rmmod nvme_keyring 00:14:02.085 10:19:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:02.085 10:19:15 -- nvmf/common.sh@123 -- # set -e 00:14:02.085 10:19:15 -- nvmf/common.sh@124 -- # return 0 00:14:02.085 10:19:15 -- nvmf/common.sh@477 -- # '[' -n 78008 ']' 00:14:02.085 10:19:15 -- nvmf/common.sh@478 -- # killprocess 78008 00:14:02.085 10:19:15 -- common/autotest_common.sh@926 -- # '[' -z 78008 ']' 00:14:02.085 10:19:15 -- common/autotest_common.sh@930 -- # kill -0 78008 00:14:02.085 10:19:15 -- common/autotest_common.sh@931 -- # uname 00:14:02.085 10:19:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:02.085 10:19:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78008 00:14:02.085 killing process with pid 78008 00:14:02.085 10:19:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:02.085 10:19:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:02.085 10:19:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78008' 00:14:02.085 10:19:15 -- common/autotest_common.sh@945 -- # kill 78008 00:14:02.085 10:19:15 -- common/autotest_common.sh@950 -- # wait 78008 00:14:02.343 10:19:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:02.343 10:19:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:02.343 10:19:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:02.343 10:19:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:02.343 10:19:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:02.343 10:19:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.343 10:19:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:02.343 10:19:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.343 10:19:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:02.343 ************************************ 00:14:02.343 END TEST nvmf_multiconnection 00:14:02.343 ************************************ 00:14:02.343 00:14:02.343 real 0m48.979s 00:14:02.343 user 2m44.641s 00:14:02.343 sys 0m31.016s 00:14:02.343 10:19:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:02.343 10:19:15 -- 
common/autotest_common.sh@10 -- # set +x 00:14:02.602 10:19:15 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:14:02.602 10:19:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:02.602 10:19:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:02.602 10:19:15 -- common/autotest_common.sh@10 -- # set +x 00:14:02.602 ************************************ 00:14:02.602 START TEST nvmf_initiator_timeout 00:14:02.602 ************************************ 00:14:02.602 10:19:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:14:02.602 * Looking for test storage... 00:14:02.602 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:02.602 10:19:15 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:02.602 10:19:15 -- nvmf/common.sh@7 -- # uname -s 00:14:02.602 10:19:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.602 10:19:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.602 10:19:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.602 10:19:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.602 10:19:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.602 10:19:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.602 10:19:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.602 10:19:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.602 10:19:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.602 10:19:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.602 10:19:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:14:02.602 10:19:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:14:02.602 10:19:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.602 10:19:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.602 10:19:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:02.602 10:19:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:02.602 10:19:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.602 10:19:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.602 10:19:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.602 10:19:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.602 10:19:15 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.602 10:19:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.602 10:19:15 -- paths/export.sh@5 -- # export PATH 00:14:02.602 10:19:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.602 10:19:15 -- nvmf/common.sh@46 -- # : 0 00:14:02.602 10:19:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:02.602 10:19:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:02.602 10:19:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:02.603 10:19:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.603 10:19:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.603 10:19:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:02.603 10:19:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:02.603 10:19:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:02.603 10:19:15 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:02.603 10:19:15 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:02.603 10:19:15 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:14:02.603 10:19:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:02.603 10:19:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.603 10:19:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:02.603 10:19:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:02.603 10:19:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:02.603 10:19:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.603 10:19:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:02.603 10:19:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.603 10:19:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:02.603 10:19:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:02.603 10:19:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:02.603 10:19:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:02.603 10:19:15 -- nvmf/common.sh@419 -- # [[ tcp == 
tcp ]] 00:14:02.603 10:19:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:02.603 10:19:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.603 10:19:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.603 10:19:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:02.603 10:19:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:02.603 10:19:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:02.603 10:19:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:02.603 10:19:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:02.603 10:19:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.603 10:19:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:02.603 10:19:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:02.603 10:19:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:02.603 10:19:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:02.603 10:19:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:02.603 10:19:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:02.603 Cannot find device "nvmf_tgt_br" 00:14:02.603 10:19:15 -- nvmf/common.sh@154 -- # true 00:14:02.603 10:19:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:02.603 Cannot find device "nvmf_tgt_br2" 00:14:02.603 10:19:15 -- nvmf/common.sh@155 -- # true 00:14:02.603 10:19:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:02.603 10:19:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:02.603 Cannot find device "nvmf_tgt_br" 00:14:02.603 10:19:15 -- nvmf/common.sh@157 -- # true 00:14:02.603 10:19:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:02.603 Cannot find device "nvmf_tgt_br2" 00:14:02.603 10:19:15 -- nvmf/common.sh@158 -- # true 00:14:02.603 10:19:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:02.603 10:19:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:02.861 10:19:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:02.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:02.861 10:19:16 -- nvmf/common.sh@161 -- # true 00:14:02.861 10:19:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:02.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:02.861 10:19:16 -- nvmf/common.sh@162 -- # true 00:14:02.861 10:19:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:02.861 10:19:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:02.861 10:19:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:02.861 10:19:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:02.861 10:19:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:02.861 10:19:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:02.861 10:19:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:02.861 10:19:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:02.861 10:19:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 
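The nvmf_veth_init commands logged above and continued just below build the test's virtual topology: a dedicated network namespace for the SPDK target, a veth pair for the initiator side and another for the target side, and a bridge joining the host-side ends, with an iptables rule admitting the NVMe/TCP port. Condensed into a stand-alone sketch (interface names and addresses copied from the log; the second target interface nvmf_tgt_if2 / 10.0.0.3 is omitted for brevity; assumes root on a host with iproute2 and iptables; this is an illustration, not the test script itself):

# Minimal reconstruction of the veth/namespace topology the test uses.
# All names (nvmf_tgt_ns_spdk, nvmf_init_if, nvmf_br, ...) are taken from the log.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                  # bridge the two host-side peers
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator interface
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                               # host -> namespace reachability check

The ping transcripts that follow in the log are exactly this sanity check: 10.0.0.2 and 10.0.0.3 reachable from the host, and 10.0.0.1 reachable from inside the namespace, before the target is started.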
00:14:02.861 10:19:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:02.862 10:19:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:02.862 10:19:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:02.862 10:19:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:02.862 10:19:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:02.862 10:19:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:02.862 10:19:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:02.862 10:19:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:02.862 10:19:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:02.862 10:19:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:02.862 10:19:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:02.862 10:19:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:02.862 10:19:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:02.862 10:19:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:02.862 10:19:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:02.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:14:02.862 00:14:02.862 --- 10.0.0.2 ping statistics --- 00:14:02.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.862 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:14:02.862 10:19:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:02.862 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:02.862 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:14:02.862 00:14:02.862 --- 10.0.0.3 ping statistics --- 00:14:02.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.862 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:02.862 10:19:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:02.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:02.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:02.862 00:14:02.862 --- 10.0.0.1 ping statistics --- 00:14:02.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.862 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:02.862 10:19:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.862 10:19:16 -- nvmf/common.sh@421 -- # return 0 00:14:02.862 10:19:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:02.862 10:19:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.862 10:19:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:02.862 10:19:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:02.862 10:19:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.862 10:19:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:02.862 10:19:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:02.862 10:19:16 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:14:02.862 10:19:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:02.862 10:19:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:02.862 10:19:16 -- common/autotest_common.sh@10 -- # set +x 00:14:02.862 10:19:16 -- nvmf/common.sh@469 -- # nvmfpid=79062 00:14:02.862 10:19:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:02.862 10:19:16 -- nvmf/common.sh@470 -- # waitforlisten 79062 00:14:02.862 10:19:16 -- common/autotest_common.sh@819 -- # '[' -z 79062 ']' 00:14:02.862 10:19:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.862 10:19:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:02.862 10:19:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.862 10:19:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:02.862 10:19:16 -- common/autotest_common.sh@10 -- # set +x 00:14:02.862 [2024-07-26 10:19:16.297531] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:14:02.862 [2024-07-26 10:19:16.297659] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.120 [2024-07-26 10:19:16.433502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:03.120 [2024-07-26 10:19:16.521627] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:03.120 [2024-07-26 10:19:16.521812] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.120 [2024-07-26 10:19:16.521828] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.120 [2024-07-26 10:19:16.521837] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
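With the namespace in place, nvmfappstart launches the target inside it (the "ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF" invocation above, pid 79062), and the rpc_cmd calls that follow below build the initiator-timeout fixture: a malloc bdev wrapped in a delay bdev and exported over NVMe/TCP. A condensed sketch of that flow, written with scripts/rpc.py in place of the suite's rpc_cmd wrapper (values copied from the log; the rpc.py form is an illustrative assumption, not the test script itself):

SPDK=/home/vagrant/spdk_repo/spdk    # repo path as it appears in the log

# Start the target inside the namespace, same flags as the log.
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
# (the test's waitforlisten polls the RPC socket here before continuing)

# Build the delay-bdev fixture and export it.
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0                              # 64 MiB malloc bdev, 512 B blocks
"$SPDK/scripts/rpc.py" bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30    # ~30 us artificial latency
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# While the 60 s fio write job is running, the test raises the delay-bdev latency
# to ~31 s per I/O (31000000 us), then drops it back to 30 us so the job can drain:
"$SPDK/scripts/rpc.py" bdev_delay_update_latency Delay0 avg_read 31000000
"$SPDK/scripts/rpc.py" bdev_delay_update_latency Delay0 avg_write 31000000
"$SPDK/scripts/rpc.py" bdev_delay_update_latency Delay0 avg_read 30
"$SPDK/scripts/rpc.py" bdev_delay_update_latency Delay0 avg_write 30

Judging from the log, the delay bdev is the knob that makes the scenario reproducible: its latency is flipped far above the initiator's I/O timeout mid-run and then restored, instead of relying on genuinely slow media.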
00:14:03.120 [2024-07-26 10:19:16.521970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.120 [2024-07-26 10:19:16.522704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.120 [2024-07-26 10:19:16.522804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:03.120 [2024-07-26 10:19:16.522814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.055 10:19:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:04.055 10:19:17 -- common/autotest_common.sh@852 -- # return 0 00:14:04.055 10:19:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:04.055 10:19:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:04.055 10:19:17 -- common/autotest_common.sh@10 -- # set +x 00:14:04.055 10:19:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.055 10:19:17 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:04.055 10:19:17 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:04.055 10:19:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.055 10:19:17 -- common/autotest_common.sh@10 -- # set +x 00:14:04.055 Malloc0 00:14:04.055 10:19:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.055 10:19:17 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:14:04.055 10:19:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.055 10:19:17 -- common/autotest_common.sh@10 -- # set +x 00:14:04.055 Delay0 00:14:04.055 10:19:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.055 10:19:17 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:04.055 10:19:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.055 10:19:17 -- common/autotest_common.sh@10 -- # set +x 00:14:04.055 [2024-07-26 10:19:17.301860] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.055 10:19:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.055 10:19:17 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:04.055 10:19:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.055 10:19:17 -- common/autotest_common.sh@10 -- # set +x 00:14:04.055 10:19:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.055 10:19:17 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.055 10:19:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.055 10:19:17 -- common/autotest_common.sh@10 -- # set +x 00:14:04.055 10:19:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.055 10:19:17 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:04.055 10:19:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.055 10:19:17 -- common/autotest_common.sh@10 -- # set +x 00:14:04.055 [2024-07-26 10:19:17.330041] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.055 10:19:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.055 10:19:17 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 --hostid=ff23582e-2e24-4796-b69f-f798c3c56909 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:04.055 10:19:17 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:14:04.055 10:19:17 -- common/autotest_common.sh@1177 -- # local i=0 00:14:04.055 10:19:17 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:04.055 10:19:17 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:04.055 10:19:17 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:06.585 10:19:19 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:06.585 10:19:19 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:06.585 10:19:19 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:06.585 10:19:19 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:06.585 10:19:19 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:06.585 10:19:19 -- common/autotest_common.sh@1187 -- # return 0 00:14:06.585 10:19:19 -- target/initiator_timeout.sh@35 -- # fio_pid=79132 00:14:06.585 10:19:19 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:14:06.585 10:19:19 -- target/initiator_timeout.sh@37 -- # sleep 3 00:14:06.585 [global] 00:14:06.585 thread=1 00:14:06.585 invalidate=1 00:14:06.585 rw=write 00:14:06.585 time_based=1 00:14:06.585 runtime=60 00:14:06.585 ioengine=libaio 00:14:06.585 direct=1 00:14:06.585 bs=4096 00:14:06.585 iodepth=1 00:14:06.585 norandommap=0 00:14:06.585 numjobs=1 00:14:06.585 00:14:06.585 verify_dump=1 00:14:06.585 verify_backlog=512 00:14:06.585 verify_state_save=0 00:14:06.585 do_verify=1 00:14:06.585 verify=crc32c-intel 00:14:06.585 [job0] 00:14:06.585 filename=/dev/nvme0n1 00:14:06.585 Could not set queue depth (nvme0n1) 00:14:06.585 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:06.585 fio-3.35 00:14:06.585 Starting 1 thread 00:14:09.152 10:19:22 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:14:09.152 10:19:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.152 10:19:22 -- common/autotest_common.sh@10 -- # set +x 00:14:09.152 true 00:14:09.152 10:19:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.152 10:19:22 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:14:09.152 10:19:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.152 10:19:22 -- common/autotest_common.sh@10 -- # set +x 00:14:09.152 true 00:14:09.152 10:19:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.152 10:19:22 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:14:09.152 10:19:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.152 10:19:22 -- common/autotest_common.sh@10 -- # set +x 00:14:09.152 true 00:14:09.152 10:19:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.152 10:19:22 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:14:09.152 10:19:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.152 10:19:22 -- common/autotest_common.sh@10 -- # set +x 00:14:09.152 true 00:14:09.152 10:19:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.152 10:19:22 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:14:12.436 10:19:25 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:14:12.436 10:19:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:12.436 10:19:25 -- common/autotest_common.sh@10 -- # set +x 00:14:12.436 true 00:14:12.436 10:19:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.436 10:19:25 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:14:12.436 10:19:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:12.436 10:19:25 -- common/autotest_common.sh@10 -- # set +x 00:14:12.436 true 00:14:12.436 10:19:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.436 10:19:25 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:14:12.436 10:19:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:12.436 10:19:25 -- common/autotest_common.sh@10 -- # set +x 00:14:12.436 true 00:14:12.436 10:19:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.437 10:19:25 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:14:12.437 10:19:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:12.437 10:19:25 -- common/autotest_common.sh@10 -- # set +x 00:14:12.437 true 00:14:12.437 10:19:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.437 10:19:25 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:14:12.437 10:19:25 -- target/initiator_timeout.sh@54 -- # wait 79132 00:15:08.662 00:15:08.662 job0: (groupid=0, jobs=1): err= 0: pid=79153: Fri Jul 26 10:20:19 2024 00:15:08.662 read: IOPS=767, BW=3070KiB/s (3143kB/s)(180MiB/60000msec) 00:15:08.662 slat (nsec): min=11726, max=67585, avg=14999.08, stdev=3731.18 00:15:08.662 clat (usec): min=160, max=7798, avg=217.23, stdev=50.72 00:15:08.662 lat (usec): min=173, max=7816, avg=232.23, stdev=51.17 00:15:08.662 clat percentiles (usec): 00:15:08.662 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 194], 00:15:08.662 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 219], 00:15:08.662 | 70.00th=[ 225], 80.00th=[ 237], 90.00th=[ 253], 95.00th=[ 273], 00:15:08.662 | 99.00th=[ 314], 99.50th=[ 334], 99.90th=[ 502], 99.95th=[ 603], 00:15:08.662 | 99.99th=[ 1074] 00:15:08.662 write: IOPS=768, BW=3072KiB/s (3146kB/s)(180MiB/60000msec); 0 zone resets 00:15:08.662 slat (usec): min=13, max=15673, avg=22.24, stdev=81.49 00:15:08.662 clat (usec): min=118, max=40414k, avg=1044.26, stdev=188267.00 00:15:08.662 lat (usec): min=136, max=40414k, avg=1066.50, stdev=188267.01 00:15:08.662 clat percentiles (usec): 00:15:08.662 | 1.00th=[ 128], 5.00th=[ 135], 10.00th=[ 141], 20.00th=[ 147], 00:15:08.662 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 167], 00:15:08.662 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 198], 95.00th=[ 212], 00:15:08.662 | 99.00th=[ 251], 99.50th=[ 269], 99.90th=[ 490], 99.95th=[ 627], 00:15:08.662 | 99.99th=[ 2114] 00:15:08.662 bw ( KiB/s): min= 5040, max=12200, per=100.00%, avg=9257.64, stdev=1495.18, samples=39 00:15:08.662 iops : min= 1260, max= 3050, avg=2314.41, stdev=373.80, samples=39 00:15:08.662 lat (usec) : 250=93.76%, 500=6.14%, 750=0.07%, 1000=0.02% 00:15:08.662 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:15:08.662 cpu : usr=0.62%, sys=2.17%, ctx=92144, majf=0, minf=2 00:15:08.662 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:08.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.662 issued rwts: total=46047,46080,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.662 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:08.662 00:15:08.662 Run status group 0 (all jobs): 00:15:08.662 READ: bw=3070KiB/s (3143kB/s), 3070KiB/s-3070KiB/s (3143kB/s-3143kB/s), io=180MiB (189MB), run=60000-60000msec 00:15:08.662 WRITE: bw=3072KiB/s (3146kB/s), 3072KiB/s-3072KiB/s (3146kB/s-3146kB/s), io=180MiB (189MB), run=60000-60000msec 00:15:08.662 00:15:08.662 Disk stats (read/write): 00:15:08.662 nvme0n1: ios=45925/46080, merge=0/0, ticks=10323/8274, in_queue=18597, util=99.83% 00:15:08.662 10:20:19 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:08.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.663 10:20:19 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:08.663 10:20:19 -- common/autotest_common.sh@1198 -- # local i=0 00:15:08.663 10:20:19 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:08.663 10:20:19 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:08.663 10:20:19 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:08.663 10:20:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:08.663 10:20:19 -- common/autotest_common.sh@1210 -- # return 0 00:15:08.663 10:20:19 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:15:08.663 nvmf hotplug test: fio successful as expected 00:15:08.663 10:20:19 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:15:08.663 10:20:19 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:08.663 10:20:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.663 10:20:19 -- common/autotest_common.sh@10 -- # set +x 00:15:08.663 10:20:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.663 10:20:19 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:15:08.663 10:20:19 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:15:08.663 10:20:19 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:15:08.663 10:20:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:08.663 10:20:19 -- nvmf/common.sh@116 -- # sync 00:15:08.663 10:20:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:08.663 10:20:19 -- nvmf/common.sh@119 -- # set +e 00:15:08.663 10:20:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:08.663 10:20:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:08.663 rmmod nvme_tcp 00:15:08.663 rmmod nvme_fabrics 00:15:08.663 rmmod nvme_keyring 00:15:08.663 10:20:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:08.663 10:20:19 -- nvmf/common.sh@123 -- # set -e 00:15:08.663 10:20:19 -- nvmf/common.sh@124 -- # return 0 00:15:08.663 10:20:19 -- nvmf/common.sh@477 -- # '[' -n 79062 ']' 00:15:08.663 10:20:19 -- nvmf/common.sh@478 -- # killprocess 79062 00:15:08.663 10:20:19 -- common/autotest_common.sh@926 -- # '[' -z 79062 ']' 00:15:08.663 10:20:19 -- common/autotest_common.sh@930 -- # kill -0 79062 00:15:08.663 10:20:19 -- common/autotest_common.sh@931 -- # uname 00:15:08.663 10:20:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:08.663 10:20:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79062 00:15:08.663 10:20:19 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:08.663 10:20:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:08.663 killing process with pid 79062 00:15:08.663 10:20:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79062' 00:15:08.663 10:20:19 -- common/autotest_common.sh@945 -- # kill 79062 00:15:08.663 10:20:19 -- common/autotest_common.sh@950 -- # wait 79062 00:15:08.663 10:20:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:08.663 10:20:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:08.663 10:20:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:08.663 10:20:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:08.663 10:20:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:08.663 10:20:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.663 10:20:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.663 10:20:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.663 10:20:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:08.663 00:15:08.663 real 1m4.383s 00:15:08.663 user 3m53.237s 00:15:08.663 sys 0m21.349s 00:15:08.663 10:20:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:08.663 10:20:20 -- common/autotest_common.sh@10 -- # set +x 00:15:08.663 ************************************ 00:15:08.663 END TEST nvmf_initiator_timeout 00:15:08.663 ************************************ 00:15:08.663 10:20:20 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:15:08.663 10:20:20 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:15:08.663 10:20:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:08.663 10:20:20 -- common/autotest_common.sh@10 -- # set +x 00:15:08.663 10:20:20 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:15:08.663 10:20:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:08.663 10:20:20 -- common/autotest_common.sh@10 -- # set +x 00:15:08.663 10:20:20 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:15:08.663 10:20:20 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:08.663 10:20:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:08.663 10:20:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:08.663 10:20:20 -- common/autotest_common.sh@10 -- # set +x 00:15:08.663 ************************************ 00:15:08.663 START TEST nvmf_identify 00:15:08.663 ************************************ 00:15:08.663 10:20:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:08.663 * Looking for test storage... 
00:15:08.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:08.663 10:20:20 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:08.663 10:20:20 -- nvmf/common.sh@7 -- # uname -s 00:15:08.663 10:20:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.663 10:20:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.663 10:20:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.663 10:20:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.663 10:20:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.663 10:20:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.663 10:20:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.663 10:20:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.663 10:20:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.663 10:20:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.663 10:20:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:15:08.663 10:20:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:15:08.663 10:20:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.663 10:20:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.663 10:20:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:08.663 10:20:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:08.663 10:20:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.663 10:20:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.663 10:20:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.663 10:20:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.663 10:20:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.663 10:20:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.663 10:20:20 -- paths/export.sh@5 
-- # export PATH 00:15:08.663 10:20:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.663 10:20:20 -- nvmf/common.sh@46 -- # : 0 00:15:08.663 10:20:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:08.663 10:20:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:08.663 10:20:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:08.663 10:20:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.663 10:20:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.663 10:20:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:08.663 10:20:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:08.663 10:20:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:08.663 10:20:20 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:08.663 10:20:20 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:08.663 10:20:20 -- host/identify.sh@14 -- # nvmftestinit 00:15:08.663 10:20:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:08.663 10:20:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:08.663 10:20:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:08.663 10:20:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:08.664 10:20:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:08.664 10:20:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.664 10:20:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.664 10:20:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.664 10:20:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:08.664 10:20:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:08.664 10:20:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:08.664 10:20:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:08.664 10:20:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:08.664 10:20:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:08.664 10:20:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:08.664 10:20:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:08.664 10:20:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:08.664 10:20:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:08.664 10:20:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:08.664 10:20:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:08.664 10:20:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:08.664 10:20:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:08.664 10:20:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:08.664 10:20:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:08.664 10:20:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:08.664 10:20:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:08.664 10:20:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:08.664 10:20:20 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:08.664 Cannot find device "nvmf_tgt_br" 00:15:08.664 10:20:20 -- nvmf/common.sh@154 -- # true 00:15:08.664 10:20:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:08.664 Cannot find device "nvmf_tgt_br2" 00:15:08.664 10:20:20 -- nvmf/common.sh@155 -- # true 00:15:08.664 10:20:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:08.664 10:20:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:08.664 Cannot find device "nvmf_tgt_br" 00:15:08.664 10:20:20 -- nvmf/common.sh@157 -- # true 00:15:08.664 10:20:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:08.664 Cannot find device "nvmf_tgt_br2" 00:15:08.664 10:20:20 -- nvmf/common.sh@158 -- # true 00:15:08.664 10:20:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:08.664 10:20:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:08.664 10:20:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:08.664 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:08.664 10:20:20 -- nvmf/common.sh@161 -- # true 00:15:08.664 10:20:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:08.664 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:08.664 10:20:20 -- nvmf/common.sh@162 -- # true 00:15:08.664 10:20:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:08.664 10:20:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:08.664 10:20:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:08.664 10:20:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:08.664 10:20:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:08.664 10:20:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:08.664 10:20:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:08.664 10:20:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:08.664 10:20:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:08.664 10:20:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:08.664 10:20:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:08.664 10:20:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:08.664 10:20:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:08.664 10:20:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:08.664 10:20:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:08.664 10:20:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:08.664 10:20:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:08.664 10:20:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:08.664 10:20:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:08.664 10:20:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:08.664 10:20:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:08.664 10:20:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:08.664 10:20:20 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:08.664 10:20:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:08.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:08.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:15:08.664 00:15:08.664 --- 10.0.0.2 ping statistics --- 00:15:08.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.664 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:08.664 10:20:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:08.664 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:08.664 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:15:08.664 00:15:08.664 --- 10.0.0.3 ping statistics --- 00:15:08.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.664 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:08.664 10:20:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:08.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:08.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:08.664 00:15:08.664 --- 10.0.0.1 ping statistics --- 00:15:08.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.664 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:08.664 10:20:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:08.664 10:20:20 -- nvmf/common.sh@421 -- # return 0 00:15:08.664 10:20:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:08.664 10:20:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:08.664 10:20:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:08.664 10:20:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:08.664 10:20:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:08.664 10:20:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:08.664 10:20:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:08.664 10:20:20 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:08.664 10:20:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:08.664 10:20:20 -- common/autotest_common.sh@10 -- # set +x 00:15:08.664 10:20:20 -- host/identify.sh@19 -- # nvmfpid=79989 00:15:08.664 10:20:20 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:08.664 10:20:20 -- host/identify.sh@23 -- # waitforlisten 79989 00:15:08.664 10:20:20 -- common/autotest_common.sh@819 -- # '[' -z 79989 ']' 00:15:08.664 10:20:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.664 10:20:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:08.664 10:20:20 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:08.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.664 10:20:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.664 10:20:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:08.664 10:20:20 -- common/autotest_common.sh@10 -- # set +x 00:15:08.664 [2024-07-26 10:20:20.827253] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:15:08.664 [2024-07-26 10:20:20.827322] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.664 [2024-07-26 10:20:20.961069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:08.664 [2024-07-26 10:20:21.034601] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:08.664 [2024-07-26 10:20:21.034779] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.664 [2024-07-26 10:20:21.034795] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.664 [2024-07-26 10:20:21.034804] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:08.664 [2024-07-26 10:20:21.035497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.664 [2024-07-26 10:20:21.035649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.664 [2024-07-26 10:20:21.035847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:08.664 [2024-07-26 10:20:21.035855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.664 10:20:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:08.664 10:20:21 -- common/autotest_common.sh@852 -- # return 0 00:15:08.664 10:20:21 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:08.664 10:20:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.664 10:20:21 -- common/autotest_common.sh@10 -- # set +x 00:15:08.664 [2024-07-26 10:20:21.842478] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.664 10:20:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.664 10:20:21 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:08.664 10:20:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:08.664 10:20:21 -- common/autotest_common.sh@10 -- # set +x 00:15:08.664 10:20:21 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:08.664 10:20:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.664 10:20:21 -- common/autotest_common.sh@10 -- # set +x 00:15:08.664 Malloc0 00:15:08.664 10:20:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.664 10:20:21 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:08.664 10:20:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.664 10:20:21 -- common/autotest_common.sh@10 -- # set +x 00:15:08.664 10:20:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.665 10:20:21 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:08.665 10:20:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.665 10:20:21 -- common/autotest_common.sh@10 -- # set +x 00:15:08.665 10:20:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.665 10:20:21 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.665 10:20:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.665 10:20:21 -- common/autotest_common.sh@10 -- # set +x 00:15:08.665 [2024-07-26 10:20:21.955212] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.665 10:20:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.665 10:20:21 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:08.665 10:20:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.665 10:20:21 -- common/autotest_common.sh@10 -- # set +x 00:15:08.665 10:20:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.665 10:20:21 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:08.665 10:20:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.665 10:20:21 -- common/autotest_common.sh@10 -- # set +x 00:15:08.665 [2024-07-26 10:20:21.970998] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:15:08.665 [ 00:15:08.665 { 00:15:08.665 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:08.665 "subtype": "Discovery", 00:15:08.665 "listen_addresses": [ 00:15:08.665 { 00:15:08.665 "transport": "TCP", 00:15:08.665 "trtype": "TCP", 00:15:08.665 "adrfam": "IPv4", 00:15:08.665 "traddr": "10.0.0.2", 00:15:08.665 "trsvcid": "4420" 00:15:08.665 } 00:15:08.665 ], 00:15:08.665 "allow_any_host": true, 00:15:08.665 "hosts": [] 00:15:08.665 }, 00:15:08.665 { 00:15:08.665 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.665 "subtype": "NVMe", 00:15:08.665 "listen_addresses": [ 00:15:08.665 { 00:15:08.665 "transport": "TCP", 00:15:08.665 "trtype": "TCP", 00:15:08.665 "adrfam": "IPv4", 00:15:08.665 "traddr": "10.0.0.2", 00:15:08.665 "trsvcid": "4420" 00:15:08.665 } 00:15:08.665 ], 00:15:08.665 "allow_any_host": true, 00:15:08.665 "hosts": [], 00:15:08.665 "serial_number": "SPDK00000000000001", 00:15:08.665 "model_number": "SPDK bdev Controller", 00:15:08.665 "max_namespaces": 32, 00:15:08.665 "min_cntlid": 1, 00:15:08.665 "max_cntlid": 65519, 00:15:08.665 "namespaces": [ 00:15:08.665 { 00:15:08.665 "nsid": 1, 00:15:08.665 "bdev_name": "Malloc0", 00:15:08.665 "name": "Malloc0", 00:15:08.665 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:08.665 "eui64": "ABCDEF0123456789", 00:15:08.665 "uuid": "cff1b453-60f4-4795-a2bc-85eff5abd671" 00:15:08.665 } 00:15:08.665 ] 00:15:08.665 } 00:15:08.665 ] 00:15:08.665 10:20:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.665 10:20:21 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:08.665 [2024-07-26 10:20:22.008458] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
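At this point the identify test has built a Malloc0-backed subsystem with explicit NGUID/EUI64 values plus a discovery listener (the rpc_cmd calls and the nvmf_get_subsystems dump above) and is invoking spdk_nvme_identify against the discovery NQN. Condensed into a sketch (values copied from the log; the rpc.py form stands in for the suite's rpc_cmd wrapper and is illustrative only):

SPDK=/home/vagrant/spdk_repo/spdk    # repo path as it appears in the log

# Target side: namespace with fixed NGUID/EUI64, data and discovery listeners on 10.0.0.2:4420.
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Host side: query the discovery service and dump controller/namespace data.
"$SPDK/build/bin/spdk_nvme_identify" -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

The -L all flag enables every SPDK log flag, which is why the remainder of this section is a dense nvme_tcp / nvme_ctrlr DEBUG trace of the icreq handshake, FABRIC CONNECT, and PROPERTY GET/SET sequence rather than just the identify summary.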
00:15:08.665 [2024-07-26 10:20:22.008520] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80024 ] 00:15:08.928 [2024-07-26 10:20:22.150387] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:15:08.928 [2024-07-26 10:20:22.150490] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:08.928 [2024-07-26 10:20:22.150499] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:08.928 [2024-07-26 10:20:22.150514] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:08.928 [2024-07-26 10:20:22.150529] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:15:08.928 [2024-07-26 10:20:22.150695] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:15:08.928 [2024-07-26 10:20:22.150761] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x20216c0 0 00:15:08.928 [2024-07-26 10:20:22.157614] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:08.928 [2024-07-26 10:20:22.157644] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:08.928 [2024-07-26 10:20:22.157669] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:08.928 [2024-07-26 10:20:22.157673] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:08.928 [2024-07-26 10:20:22.157723] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.928 [2024-07-26 10:20:22.157732] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.928 [2024-07-26 10:20:22.157737] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20216c0) 00:15:08.928 [2024-07-26 10:20:22.157752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:08.928 [2024-07-26 10:20:22.157787] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2057f60, cid 0, qid 0 00:15:08.928 [2024-07-26 10:20:22.164610] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.928 [2024-07-26 10:20:22.164637] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.928 [2024-07-26 10:20:22.164644] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.928 [2024-07-26 10:20:22.164649] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2057f60) on tqpair=0x20216c0 00:15:08.928 [2024-07-26 10:20:22.164664] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:08.928 [2024-07-26 10:20:22.164673] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:15:08.928 [2024-07-26 10:20:22.164680] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:15:08.928 [2024-07-26 10:20:22.164699] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.928 [2024-07-26 10:20:22.164706] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.928 [2024-07-26 
10:20:22.164711] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20216c0) 00:15:08.928 [2024-07-26 10:20:22.164721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.928 [2024-07-26 10:20:22.164753] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2057f60, cid 0, qid 0 00:15:08.928 [2024-07-26 10:20:22.164824] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.928 [2024-07-26 10:20:22.164834] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.928 [2024-07-26 10:20:22.164838] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.928 [2024-07-26 10:20:22.164843] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2057f60) on tqpair=0x20216c0 00:15:08.928 [2024-07-26 10:20:22.164851] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:15:08.928 [2024-07-26 10:20:22.164860] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:15:08.928 [2024-07-26 10:20:22.164870] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.928 [2024-07-26 10:20:22.164875] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.928 [2024-07-26 10:20:22.164879] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20216c0) 00:15:08.928 [2024-07-26 10:20:22.164888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.928 [2024-07-26 10:20:22.164911] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2057f60, cid 0, qid 0 00:15:08.928 [2024-07-26 10:20:22.164967] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.928 [2024-07-26 10:20:22.164976] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.928 [2024-07-26 10:20:22.164981] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.928 [2024-07-26 10:20:22.164985] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2057f60) on tqpair=0x20216c0 00:15:08.928 [2024-07-26 10:20:22.164993] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:15:08.928 [2024-07-26 10:20:22.165003] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:15:08.928 [2024-07-26 10:20:22.165013] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.928 [2024-07-26 10:20:22.165018] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.928 [2024-07-26 10:20:22.165022] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20216c0) 00:15:08.928 [2024-07-26 10:20:22.165030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.928 [2024-07-26 10:20:22.165053] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2057f60, cid 0, qid 0 00:15:08.929 [2024-07-26 10:20:22.165136] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.929 [2024-07-26 10:20:22.165144] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.929 [2024-07-26 10:20:22.165149] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.165154] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2057f60) on tqpair=0x20216c0 00:15:08.929 [2024-07-26 10:20:22.165161] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:08.929 [2024-07-26 10:20:22.165174] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.165180] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.165184] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20216c0) 00:15:08.929 [2024-07-26 10:20:22.165193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.929 [2024-07-26 10:20:22.165215] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2057f60, cid 0, qid 0 00:15:08.929 [2024-07-26 10:20:22.165283] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.929 [2024-07-26 10:20:22.165292] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.929 [2024-07-26 10:20:22.165296] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.165301] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2057f60) on tqpair=0x20216c0 00:15:08.929 [2024-07-26 10:20:22.165307] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:15:08.929 [2024-07-26 10:20:22.165314] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:15:08.929 [2024-07-26 10:20:22.165323] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:08.929 [2024-07-26 10:20:22.165430] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:15:08.929 [2024-07-26 10:20:22.165444] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:08.929 [2024-07-26 10:20:22.165466] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.165471] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.165476] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20216c0) 00:15:08.929 [2024-07-26 10:20:22.165484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.929 [2024-07-26 10:20:22.165507] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2057f60, cid 0, qid 0 00:15:08.929 [2024-07-26 10:20:22.165571] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.929 [2024-07-26 10:20:22.165598] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.929 [2024-07-26 10:20:22.165603] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
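Every *DEBUG* line in this trace is emitted by the identify tool itself: the test passes -L all, which on a debug build enables the registered SPDK log flags, so the controller state machine (FABRIC CONNECT, the VS and CAP property reads, and the CC.EN / CSTS.RDY handshake above) is printed as it runs. A narrower repeat of the same run is sketched below; the flag name nvme is an assumption about which component registers these messages, not something the log confirms:

  # Assumption: 'nvme' is the log flag registered by lib/nvme (nvme_ctrlr.c / nvme_tcp.c).
  # -L only takes effect on a debug build, the same constraint as the '-L all' run captured here.
  build/bin/spdk_nvme_identify \
      -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L nvme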
00:15:08.929 [2024-07-26 10:20:22.165608] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2057f60) on tqpair=0x20216c0 00:15:08.929 [2024-07-26 10:20:22.165616] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:08.929 [2024-07-26 10:20:22.165629] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.165635] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.165640] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20216c0) 00:15:08.929 [2024-07-26 10:20:22.165648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.929 [2024-07-26 10:20:22.165674] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2057f60, cid 0, qid 0 00:15:08.929 [2024-07-26 10:20:22.165733] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.929 [2024-07-26 10:20:22.165742] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.929 [2024-07-26 10:20:22.165747] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.165751] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2057f60) on tqpair=0x20216c0 00:15:08.929 [2024-07-26 10:20:22.165758] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:08.929 [2024-07-26 10:20:22.165764] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:15:08.929 [2024-07-26 10:20:22.165773] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:15:08.929 [2024-07-26 10:20:22.165791] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:15:08.929 [2024-07-26 10:20:22.165804] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.165809] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.165813] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20216c0) 00:15:08.929 [2024-07-26 10:20:22.165822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.929 [2024-07-26 10:20:22.165846] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2057f60, cid 0, qid 0 00:15:08.929 [2024-07-26 10:20:22.165952] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:08.929 [2024-07-26 10:20:22.165962] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:08.929 [2024-07-26 10:20:22.165967] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.165972] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20216c0): datao=0, datal=4096, cccid=0 00:15:08.929 [2024-07-26 10:20:22.165977] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2057f60) on tqpair(0x20216c0): expected_datao=0, 
payload_size=4096 00:15:08.929 [2024-07-26 10:20:22.165987] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.165993] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.166003] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.929 [2024-07-26 10:20:22.166011] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.929 [2024-07-26 10:20:22.166015] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.166020] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2057f60) on tqpair=0x20216c0 00:15:08.929 [2024-07-26 10:20:22.166031] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:15:08.929 [2024-07-26 10:20:22.166037] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:15:08.929 [2024-07-26 10:20:22.166042] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:15:08.929 [2024-07-26 10:20:22.166048] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:15:08.929 [2024-07-26 10:20:22.166053] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:15:08.929 [2024-07-26 10:20:22.166069] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:15:08.929 [2024-07-26 10:20:22.166099] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:15:08.929 [2024-07-26 10:20:22.166109] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.166114] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.166119] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20216c0) 00:15:08.929 [2024-07-26 10:20:22.166128] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:08.929 [2024-07-26 10:20:22.166153] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2057f60, cid 0, qid 0 00:15:08.929 [2024-07-26 10:20:22.166222] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.929 [2024-07-26 10:20:22.166230] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.929 [2024-07-26 10:20:22.166235] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.166240] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2057f60) on tqpair=0x20216c0 00:15:08.929 [2024-07-26 10:20:22.166250] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.166254] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.166259] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20216c0) 00:15:08.929 [2024-07-26 10:20:22.166267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.929 [2024-07-26 
10:20:22.166274] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.166278] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.166282] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x20216c0) 00:15:08.929 [2024-07-26 10:20:22.166289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.929 [2024-07-26 10:20:22.166296] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.166301] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.166305] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x20216c0) 00:15:08.929 [2024-07-26 10:20:22.166311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.929 [2024-07-26 10:20:22.166318] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.166323] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.166327] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20216c0) 00:15:08.929 [2024-07-26 10:20:22.166333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.929 [2024-07-26 10:20:22.166339] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:15:08.929 [2024-07-26 10:20:22.166356] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:08.929 [2024-07-26 10:20:22.166366] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.166370] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.929 [2024-07-26 10:20:22.166374] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20216c0) 00:15:08.929 [2024-07-26 10:20:22.166382] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.930 [2024-07-26 10:20:22.166407] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2057f60, cid 0, qid 0 00:15:08.930 [2024-07-26 10:20:22.166416] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20580c0, cid 1, qid 0 00:15:08.930 [2024-07-26 10:20:22.166422] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2058220, cid 2, qid 0 00:15:08.930 [2024-07-26 10:20:22.166427] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2058380, cid 3, qid 0 00:15:08.930 [2024-07-26 10:20:22.166432] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20584e0, cid 4, qid 0 00:15:08.930 [2024-07-26 10:20:22.166597] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.930 [2024-07-26 10:20:22.166608] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.930 [2024-07-26 10:20:22.166613] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.166618] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x20584e0) on tqpair=0x20216c0 00:15:08.930 [2024-07-26 10:20:22.166625] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:15:08.930 [2024-07-26 10:20:22.166632] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:15:08.930 [2024-07-26 10:20:22.166646] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.166653] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.166657] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20216c0) 00:15:08.930 [2024-07-26 10:20:22.166666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.930 [2024-07-26 10:20:22.166691] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20584e0, cid 4, qid 0 00:15:08.930 [2024-07-26 10:20:22.166762] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:08.930 [2024-07-26 10:20:22.166771] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:08.930 [2024-07-26 10:20:22.166775] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.166780] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20216c0): datao=0, datal=4096, cccid=4 00:15:08.930 [2024-07-26 10:20:22.166785] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20584e0) on tqpair(0x20216c0): expected_datao=0, payload_size=4096 00:15:08.930 [2024-07-26 10:20:22.166794] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.166799] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.166809] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.930 [2024-07-26 10:20:22.166816] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.930 [2024-07-26 10:20:22.166821] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.166825] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20584e0) on tqpair=0x20216c0 00:15:08.930 [2024-07-26 10:20:22.166841] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:15:08.930 [2024-07-26 10:20:22.166872] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.166880] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.166884] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20216c0) 00:15:08.930 [2024-07-26 10:20:22.166893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.930 [2024-07-26 10:20:22.166902] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.166906] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.166910] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20216c0) 00:15:08.930 [2024-07-26 10:20:22.166917] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.930 [2024-07-26 10:20:22.166947] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20584e0, cid 4, qid 0 00:15:08.930 [2024-07-26 10:20:22.166957] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2058640, cid 5, qid 0 00:15:08.930 [2024-07-26 10:20:22.167092] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:08.930 [2024-07-26 10:20:22.167101] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:08.930 [2024-07-26 10:20:22.167106] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.167110] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20216c0): datao=0, datal=1024, cccid=4 00:15:08.930 [2024-07-26 10:20:22.167115] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20584e0) on tqpair(0x20216c0): expected_datao=0, payload_size=1024 00:15:08.930 [2024-07-26 10:20:22.167124] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.167129] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.167135] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.930 [2024-07-26 10:20:22.167142] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.930 [2024-07-26 10:20:22.167146] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.167151] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2058640) on tqpair=0x20216c0 00:15:08.930 [2024-07-26 10:20:22.167174] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.930 [2024-07-26 10:20:22.167184] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.930 [2024-07-26 10:20:22.167188] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.167193] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20584e0) on tqpair=0x20216c0 00:15:08.930 [2024-07-26 10:20:22.167215] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.167223] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.167227] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20216c0) 00:15:08.930 [2024-07-26 10:20:22.167236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.930 [2024-07-26 10:20:22.167266] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20584e0, cid 4, qid 0 00:15:08.930 [2024-07-26 10:20:22.167343] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:08.930 [2024-07-26 10:20:22.167352] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:08.930 [2024-07-26 10:20:22.167357] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.167361] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20216c0): datao=0, datal=3072, cccid=4 00:15:08.930 [2024-07-26 10:20:22.167367] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20584e0) on tqpair(0x20216c0): expected_datao=0, payload_size=3072 00:15:08.930 [2024-07-26 
10:20:22.167375] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.167380] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.167390] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.930 [2024-07-26 10:20:22.167398] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.930 [2024-07-26 10:20:22.167402] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.167407] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20584e0) on tqpair=0x20216c0 00:15:08.930 [2024-07-26 10:20:22.167419] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.167425] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.167429] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20216c0) 00:15:08.930 [2024-07-26 10:20:22.167437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.930 [2024-07-26 10:20:22.167467] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20584e0, cid 4, qid 0 00:15:08.930 [2024-07-26 10:20:22.167545] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:08.930 [2024-07-26 10:20:22.167554] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:08.930 [2024-07-26 10:20:22.167559] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.167563] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20216c0): datao=0, datal=8, cccid=4 00:15:08.930 [2024-07-26 10:20:22.167568] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20584e0) on tqpair(0x20216c0): expected_datao=0, payload_size=8 00:15:08.930 [2024-07-26 10:20:22.167591] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.167598] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.167619] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.930 [2024-07-26 10:20:22.167629] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.930 [2024-07-26 10:20:22.167633] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.930 [2024-07-26 10:20:22.167638] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20584e0) on tqpair=0x20216c0 00:15:08.930 ===================================================== 00:15:08.930 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:08.930 ===================================================== 00:15:08.930 Controller Capabilities/Features 00:15:08.930 ================================ 00:15:08.930 Vendor ID: 0000 00:15:08.930 Subsystem Vendor ID: 0000 00:15:08.930 Serial Number: .................... 00:15:08.930 Model Number: ........................................ 
00:15:08.930 Firmware Version: 24.01.1 00:15:08.930 Recommended Arb Burst: 0 00:15:08.930 IEEE OUI Identifier: 00 00 00 00:15:08.930 Multi-path I/O 00:15:08.930 May have multiple subsystem ports: No 00:15:08.930 May have multiple controllers: No 00:15:08.930 Associated with SR-IOV VF: No 00:15:08.930 Max Data Transfer Size: 131072 00:15:08.930 Max Number of Namespaces: 0 00:15:08.930 Max Number of I/O Queues: 1024 00:15:08.930 NVMe Specification Version (VS): 1.3 00:15:08.930 NVMe Specification Version (Identify): 1.3 00:15:08.930 Maximum Queue Entries: 128 00:15:08.930 Contiguous Queues Required: Yes 00:15:08.930 Arbitration Mechanisms Supported 00:15:08.930 Weighted Round Robin: Not Supported 00:15:08.930 Vendor Specific: Not Supported 00:15:08.930 Reset Timeout: 15000 ms 00:15:08.930 Doorbell Stride: 4 bytes 00:15:08.930 NVM Subsystem Reset: Not Supported 00:15:08.930 Command Sets Supported 00:15:08.930 NVM Command Set: Supported 00:15:08.930 Boot Partition: Not Supported 00:15:08.930 Memory Page Size Minimum: 4096 bytes 00:15:08.930 Memory Page Size Maximum: 4096 bytes 00:15:08.931 Persistent Memory Region: Not Supported 00:15:08.931 Optional Asynchronous Events Supported 00:15:08.931 Namespace Attribute Notices: Not Supported 00:15:08.931 Firmware Activation Notices: Not Supported 00:15:08.931 ANA Change Notices: Not Supported 00:15:08.931 PLE Aggregate Log Change Notices: Not Supported 00:15:08.931 LBA Status Info Alert Notices: Not Supported 00:15:08.931 EGE Aggregate Log Change Notices: Not Supported 00:15:08.931 Normal NVM Subsystem Shutdown event: Not Supported 00:15:08.931 Zone Descriptor Change Notices: Not Supported 00:15:08.931 Discovery Log Change Notices: Supported 00:15:08.931 Controller Attributes 00:15:08.931 128-bit Host Identifier: Not Supported 00:15:08.931 Non-Operational Permissive Mode: Not Supported 00:15:08.931 NVM Sets: Not Supported 00:15:08.931 Read Recovery Levels: Not Supported 00:15:08.931 Endurance Groups: Not Supported 00:15:08.931 Predictable Latency Mode: Not Supported 00:15:08.931 Traffic Based Keep ALive: Not Supported 00:15:08.931 Namespace Granularity: Not Supported 00:15:08.931 SQ Associations: Not Supported 00:15:08.931 UUID List: Not Supported 00:15:08.931 Multi-Domain Subsystem: Not Supported 00:15:08.931 Fixed Capacity Management: Not Supported 00:15:08.931 Variable Capacity Management: Not Supported 00:15:08.931 Delete Endurance Group: Not Supported 00:15:08.931 Delete NVM Set: Not Supported 00:15:08.931 Extended LBA Formats Supported: Not Supported 00:15:08.931 Flexible Data Placement Supported: Not Supported 00:15:08.931 00:15:08.931 Controller Memory Buffer Support 00:15:08.931 ================================ 00:15:08.931 Supported: No 00:15:08.931 00:15:08.931 Persistent Memory Region Support 00:15:08.931 ================================ 00:15:08.931 Supported: No 00:15:08.931 00:15:08.931 Admin Command Set Attributes 00:15:08.931 ============================ 00:15:08.931 Security Send/Receive: Not Supported 00:15:08.931 Format NVM: Not Supported 00:15:08.931 Firmware Activate/Download: Not Supported 00:15:08.931 Namespace Management: Not Supported 00:15:08.931 Device Self-Test: Not Supported 00:15:08.931 Directives: Not Supported 00:15:08.931 NVMe-MI: Not Supported 00:15:08.931 Virtualization Management: Not Supported 00:15:08.931 Doorbell Buffer Config: Not Supported 00:15:08.931 Get LBA Status Capability: Not Supported 00:15:08.931 Command & Feature Lockdown Capability: Not Supported 00:15:08.931 Abort Command Limit: 1 00:15:08.931 
Async Event Request Limit: 4 00:15:08.931 Number of Firmware Slots: N/A 00:15:08.931 Firmware Slot 1 Read-Only: N/A 00:15:08.931 Firmware Activation Without Reset: N/A 00:15:08.931 Multiple Update Detection Support: N/A 00:15:08.931 Firmware Update Granularity: No Information Provided 00:15:08.931 Per-Namespace SMART Log: No 00:15:08.931 Asymmetric Namespace Access Log Page: Not Supported 00:15:08.931 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:08.931 Command Effects Log Page: Not Supported 00:15:08.931 Get Log Page Extended Data: Supported 00:15:08.931 Telemetry Log Pages: Not Supported 00:15:08.931 Persistent Event Log Pages: Not Supported 00:15:08.931 Supported Log Pages Log Page: May Support 00:15:08.931 Commands Supported & Effects Log Page: Not Supported 00:15:08.931 Feature Identifiers & Effects Log Page:May Support 00:15:08.931 NVMe-MI Commands & Effects Log Page: May Support 00:15:08.931 Data Area 4 for Telemetry Log: Not Supported 00:15:08.931 Error Log Page Entries Supported: 128 00:15:08.931 Keep Alive: Not Supported 00:15:08.931 00:15:08.931 NVM Command Set Attributes 00:15:08.931 ========================== 00:15:08.931 Submission Queue Entry Size 00:15:08.931 Max: 1 00:15:08.931 Min: 1 00:15:08.931 Completion Queue Entry Size 00:15:08.931 Max: 1 00:15:08.931 Min: 1 00:15:08.931 Number of Namespaces: 0 00:15:08.931 Compare Command: Not Supported 00:15:08.931 Write Uncorrectable Command: Not Supported 00:15:08.931 Dataset Management Command: Not Supported 00:15:08.931 Write Zeroes Command: Not Supported 00:15:08.931 Set Features Save Field: Not Supported 00:15:08.931 Reservations: Not Supported 00:15:08.931 Timestamp: Not Supported 00:15:08.931 Copy: Not Supported 00:15:08.931 Volatile Write Cache: Not Present 00:15:08.931 Atomic Write Unit (Normal): 1 00:15:08.931 Atomic Write Unit (PFail): 1 00:15:08.931 Atomic Compare & Write Unit: 1 00:15:08.931 Fused Compare & Write: Supported 00:15:08.931 Scatter-Gather List 00:15:08.931 SGL Command Set: Supported 00:15:08.931 SGL Keyed: Supported 00:15:08.931 SGL Bit Bucket Descriptor: Not Supported 00:15:08.931 SGL Metadata Pointer: Not Supported 00:15:08.931 Oversized SGL: Not Supported 00:15:08.931 SGL Metadata Address: Not Supported 00:15:08.931 SGL Offset: Supported 00:15:08.931 Transport SGL Data Block: Not Supported 00:15:08.931 Replay Protected Memory Block: Not Supported 00:15:08.931 00:15:08.931 Firmware Slot Information 00:15:08.931 ========================= 00:15:08.931 Active slot: 0 00:15:08.931 00:15:08.931 00:15:08.931 Error Log 00:15:08.931 ========= 00:15:08.931 00:15:08.931 Active Namespaces 00:15:08.931 ================= 00:15:08.931 Discovery Log Page 00:15:08.931 ================== 00:15:08.931 Generation Counter: 2 00:15:08.931 Number of Records: 2 00:15:08.931 Record Format: 0 00:15:08.931 00:15:08.931 Discovery Log Entry 0 00:15:08.931 ---------------------- 00:15:08.931 Transport Type: 3 (TCP) 00:15:08.931 Address Family: 1 (IPv4) 00:15:08.931 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:08.931 Entry Flags: 00:15:08.931 Duplicate Returned Information: 1 00:15:08.931 Explicit Persistent Connection Support for Discovery: 1 00:15:08.931 Transport Requirements: 00:15:08.931 Secure Channel: Not Required 00:15:08.931 Port ID: 0 (0x0000) 00:15:08.931 Controller ID: 65535 (0xffff) 00:15:08.931 Admin Max SQ Size: 128 00:15:08.931 Transport Service Identifier: 4420 00:15:08.931 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:08.931 Transport Address: 10.0.0.2 00:15:08.931 
Discovery Log Entry 1 00:15:08.931 ---------------------- 00:15:08.931 Transport Type: 3 (TCP) 00:15:08.931 Address Family: 1 (IPv4) 00:15:08.931 Subsystem Type: 2 (NVM Subsystem) 00:15:08.931 Entry Flags: 00:15:08.931 Duplicate Returned Information: 0 00:15:08.931 Explicit Persistent Connection Support for Discovery: 0 00:15:08.931 Transport Requirements: 00:15:08.931 Secure Channel: Not Required 00:15:08.931 Port ID: 0 (0x0000) 00:15:08.931 Controller ID: 65535 (0xffff) 00:15:08.931 Admin Max SQ Size: 128 00:15:08.931 Transport Service Identifier: 4420 00:15:08.931 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:08.931 Transport Address: 10.0.0.2 [2024-07-26 10:20:22.167782] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:15:08.931 [2024-07-26 10:20:22.167803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.931 [2024-07-26 10:20:22.167812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.931 [2024-07-26 10:20:22.167819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.931 [2024-07-26 10:20:22.167826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.931 [2024-07-26 10:20:22.167837] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.931 [2024-07-26 10:20:22.167842] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.931 [2024-07-26 10:20:22.167846] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20216c0) 00:15:08.931 [2024-07-26 10:20:22.167855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.931 [2024-07-26 10:20:22.167883] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2058380, cid 3, qid 0 00:15:08.931 [2024-07-26 10:20:22.167950] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.931 [2024-07-26 10:20:22.167959] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.931 [2024-07-26 10:20:22.167963] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.167968] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2058380) on tqpair=0x20216c0 00:15:08.932 [2024-07-26 10:20:22.167978] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.167983] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.167987] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20216c0) 00:15:08.932 [2024-07-26 10:20:22.167996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.932 [2024-07-26 10:20:22.168023] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2058380, cid 3, qid 0 00:15:08.932 [2024-07-26 10:20:22.168109] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.932 [2024-07-26 10:20:22.168117] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.932 [2024-07-26 10:20:22.168121] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.168126] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2058380) on tqpair=0x20216c0 00:15:08.932 [2024-07-26 10:20:22.168132] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:15:08.932 [2024-07-26 10:20:22.168138] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:15:08.932 [2024-07-26 10:20:22.168149] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.168155] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.168160] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20216c0) 00:15:08.932 [2024-07-26 10:20:22.168168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.932 [2024-07-26 10:20:22.168190] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2058380, cid 3, qid 0 00:15:08.932 [2024-07-26 10:20:22.168254] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.932 [2024-07-26 10:20:22.168262] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.932 [2024-07-26 10:20:22.168266] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.168271] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2058380) on tqpair=0x20216c0 00:15:08.932 [2024-07-26 10:20:22.168284] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.168291] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.168295] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20216c0) 00:15:08.932 [2024-07-26 10:20:22.168303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.932 [2024-07-26 10:20:22.168325] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2058380, cid 3, qid 0 00:15:08.932 [2024-07-26 10:20:22.168391] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.932 [2024-07-26 10:20:22.168399] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.932 [2024-07-26 10:20:22.168404] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.168408] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2058380) on tqpair=0x20216c0 00:15:08.932 [2024-07-26 10:20:22.168421] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.168427] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.168432] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20216c0) 00:15:08.932 [2024-07-26 10:20:22.168447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.932 [2024-07-26 10:20:22.168469] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2058380, cid 3, qid 0 00:15:08.932 [2024-07-26 10:20:22.168528] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.932 [2024-07-26 
10:20:22.168537] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.932 [2024-07-26 10:20:22.168541] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.168546] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2058380) on tqpair=0x20216c0 00:15:08.932 [2024-07-26 10:20:22.168559] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.168565] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.168569] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20216c0) 00:15:08.932 [2024-07-26 10:20:22.171641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.932 [2024-07-26 10:20:22.171695] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2058380, cid 3, qid 0 00:15:08.932 [2024-07-26 10:20:22.171773] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.932 [2024-07-26 10:20:22.171783] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.932 [2024-07-26 10:20:22.171788] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.171793] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2058380) on tqpair=0x20216c0 00:15:08.932 [2024-07-26 10:20:22.171805] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 3 milliseconds 00:15:08.932 00:15:08.932 10:20:22 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:08.932 [2024-07-26 10:20:22.214332] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
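The first identify pass, against the discovery subsystem, ends with the controller shut down after about 3 milliseconds; the script (host/identify.sh@45) then immediately repeats the identify against the NVM subsystem nqn.2016-06.io.spdk:cnode1, which owns the Malloc0 namespace listed earlier. The only difference between the two invocations is the subnqn key in the transport ID string passed to -r, as the pair below (copied from the log) shows:

  # First run: discovery subsystem; second run: the NVM subsystem backing namespace Malloc0.
  build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
  build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all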
00:15:08.932 [2024-07-26 10:20:22.214408] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80026 ] 00:15:08.932 [2024-07-26 10:20:22.356171] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:15:08.932 [2024-07-26 10:20:22.356247] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:08.932 [2024-07-26 10:20:22.356256] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:08.932 [2024-07-26 10:20:22.356269] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:08.932 [2024-07-26 10:20:22.356282] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:15:08.932 [2024-07-26 10:20:22.356409] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:15:08.932 [2024-07-26 10:20:22.356515] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x18746c0 0 00:15:08.932 [2024-07-26 10:20:22.363707] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:08.932 [2024-07-26 10:20:22.363790] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:08.932 [2024-07-26 10:20:22.363799] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:08.932 [2024-07-26 10:20:22.363803] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:08.932 [2024-07-26 10:20:22.363852] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.363870] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.363875] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18746c0) 00:15:08.932 [2024-07-26 10:20:22.363889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:08.932 [2024-07-26 10:20:22.363925] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18aaf60, cid 0, qid 0 00:15:08.932 [2024-07-26 10:20:22.370635] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.932 [2024-07-26 10:20:22.370658] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.932 [2024-07-26 10:20:22.370664] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.370669] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18aaf60) on tqpair=0x18746c0 00:15:08.932 [2024-07-26 10:20:22.370700] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:08.932 [2024-07-26 10:20:22.370709] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:15:08.932 [2024-07-26 10:20:22.370716] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:15:08.932 [2024-07-26 10:20:22.370734] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.370741] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.370745] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18746c0) 00:15:08.932 [2024-07-26 10:20:22.370755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.932 [2024-07-26 10:20:22.370786] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18aaf60, cid 0, qid 0 00:15:08.932 [2024-07-26 10:20:22.370851] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.932 [2024-07-26 10:20:22.370860] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.932 [2024-07-26 10:20:22.370865] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.370869] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18aaf60) on tqpair=0x18746c0 00:15:08.932 [2024-07-26 10:20:22.370877] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:15:08.932 [2024-07-26 10:20:22.370886] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:15:08.932 [2024-07-26 10:20:22.370896] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.370900] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.370905] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18746c0) 00:15:08.932 [2024-07-26 10:20:22.370913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.932 [2024-07-26 10:20:22.370936] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18aaf60, cid 0, qid 0 00:15:08.932 [2024-07-26 10:20:22.371185] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.932 [2024-07-26 10:20:22.371193] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.932 [2024-07-26 10:20:22.371198] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.932 [2024-07-26 10:20:22.371202] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18aaf60) on tqpair=0x18746c0 00:15:08.932 [2024-07-26 10:20:22.371216] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:15:08.932 [2024-07-26 10:20:22.371226] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:15:08.933 [2024-07-26 10:20:22.371235] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.933 [2024-07-26 10:20:22.371240] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.933 [2024-07-26 10:20:22.371244] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18746c0) 00:15:08.933 [2024-07-26 10:20:22.371253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.933 [2024-07-26 10:20:22.371275] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18aaf60, cid 0, qid 0 00:15:08.933 [2024-07-26 10:20:22.371484] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.933 [2024-07-26 10:20:22.371492] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.933 [2024-07-26 
10:20:22.371496] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.933 [2024-07-26 10:20:22.371501] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18aaf60) on tqpair=0x18746c0 00:15:08.933 [2024-07-26 10:20:22.371508] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:08.933 [2024-07-26 10:20:22.371530] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.933 [2024-07-26 10:20:22.371536] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.933 [2024-07-26 10:20:22.371540] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18746c0) 00:15:08.933 [2024-07-26 10:20:22.371549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.933 [2024-07-26 10:20:22.371585] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18aaf60, cid 0, qid 0 00:15:08.933 [2024-07-26 10:20:22.372099] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.933 [2024-07-26 10:20:22.372119] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.933 [2024-07-26 10:20:22.372125] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.933 [2024-07-26 10:20:22.372129] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18aaf60) on tqpair=0x18746c0 00:15:08.933 [2024-07-26 10:20:22.372136] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:15:08.933 [2024-07-26 10:20:22.372143] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:15:08.933 [2024-07-26 10:20:22.372162] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:08.933 [2024-07-26 10:20:22.372269] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:15:08.933 [2024-07-26 10:20:22.372274] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:08.933 [2024-07-26 10:20:22.372285] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.933 [2024-07-26 10:20:22.372290] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.933 [2024-07-26 10:20:22.372294] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18746c0) 00:15:08.933 [2024-07-26 10:20:22.372303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.933 [2024-07-26 10:20:22.372329] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18aaf60, cid 0, qid 0 00:15:08.933 [2024-07-26 10:20:22.372697] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.933 [2024-07-26 10:20:22.372716] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.933 [2024-07-26 10:20:22.372722] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.933 [2024-07-26 10:20:22.372727] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18aaf60) on tqpair=0x18746c0 00:15:08.933 
[2024-07-26 10:20:22.372734] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:08.933 [2024-07-26 10:20:22.372748] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.933 [2024-07-26 10:20:22.372754] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.933 [2024-07-26 10:20:22.372759] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18746c0) 00:15:08.933 [2024-07-26 10:20:22.372767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.933 [2024-07-26 10:20:22.372792] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18aaf60, cid 0, qid 0 00:15:08.933 [2024-07-26 10:20:22.372852] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.933 [2024-07-26 10:20:22.372860] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.933 [2024-07-26 10:20:22.372864] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.933 [2024-07-26 10:20:22.372869] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18aaf60) on tqpair=0x18746c0 00:15:08.933 [2024-07-26 10:20:22.372875] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:08.933 [2024-07-26 10:20:22.372881] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:15:08.933 [2024-07-26 10:20:22.372890] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:15:08.933 [2024-07-26 10:20:22.372908] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:15:08.933 [2024-07-26 10:20:22.372920] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.933 [2024-07-26 10:20:22.372925] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.933 [2024-07-26 10:20:22.372929] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18746c0) 00:15:08.933 [2024-07-26 10:20:22.372938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.933 [2024-07-26 10:20:22.372962] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18aaf60, cid 0, qid 0 00:15:08.933 [2024-07-26 10:20:22.373422] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:08.933 [2024-07-26 10:20:22.373440] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:08.933 [2024-07-26 10:20:22.373446] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:08.933 [2024-07-26 10:20:22.373450] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18746c0): datao=0, datal=4096, cccid=0 00:15:08.933 [2024-07-26 10:20:22.373456] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18aaf60) on tqpair(0x18746c0): expected_datao=0, payload_size=4096 00:15:08.933 [2024-07-26 10:20:22.373466] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:08.933 [2024-07-26 10:20:22.373471] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:15:08.933 [2024-07-26 10:20:22.373482] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.933 [2024-07-26 10:20:22.373489] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.933 [2024-07-26 10:20:22.373493] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.933 [2024-07-26 10:20:22.373498] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18aaf60) on tqpair=0x18746c0 00:15:08.933 [2024-07-26 10:20:22.373508] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:15:08.933 [2024-07-26 10:20:22.373514] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:15:08.933 [2024-07-26 10:20:22.373519] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:15:08.933 [2024-07-26 10:20:22.373524] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:15:08.933 [2024-07-26 10:20:22.373529] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:15:08.933 [2024-07-26 10:20:22.373535] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:15:08.933 [2024-07-26 10:20:22.373551] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:15:08.933 [2024-07-26 10:20:22.373562] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.933 [2024-07-26 10:20:22.373567] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.933 [2024-07-26 10:20:22.373584] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18746c0) 00:15:08.933 [2024-07-26 10:20:22.373596] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:08.933 [2024-07-26 10:20:22.373628] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18aaf60, cid 0, qid 0 00:15:08.933 [2024-07-26 10:20:22.373806] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.933 [2024-07-26 10:20:22.373814] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.933 [2024-07-26 10:20:22.373819] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.933 [2024-07-26 10:20:22.373823] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18aaf60) on tqpair=0x18746c0 00:15:08.934 [2024-07-26 10:20:22.373833] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.373838] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.373842] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18746c0) 00:15:08.934 [2024-07-26 10:20:22.373850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.934 [2024-07-26 10:20:22.373857] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.373861] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.373865] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x18746c0) 00:15:08.934 [2024-07-26 10:20:22.373872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.934 [2024-07-26 10:20:22.373879] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.373883] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.373887] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x18746c0) 00:15:08.934 [2024-07-26 10:20:22.373894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.934 [2024-07-26 10:20:22.373901] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.373905] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.373909] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18746c0) 00:15:08.934 [2024-07-26 10:20:22.373916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.934 [2024-07-26 10:20:22.373922] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:08.934 [2024-07-26 10:20:22.373938] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:08.934 [2024-07-26 10:20:22.373954] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.373959] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.373963] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18746c0) 00:15:08.934 [2024-07-26 10:20:22.373971] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.934 [2024-07-26 10:20:22.373996] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18aaf60, cid 0, qid 0 00:15:08.934 [2024-07-26 10:20:22.374005] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab0c0, cid 1, qid 0 00:15:08.934 [2024-07-26 10:20:22.374011] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab220, cid 2, qid 0 00:15:08.934 [2024-07-26 10:20:22.374016] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab380, cid 3, qid 0 00:15:08.934 [2024-07-26 10:20:22.374022] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab4e0, cid 4, qid 0 00:15:08.934 [2024-07-26 10:20:22.374402] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.934 [2024-07-26 10:20:22.374420] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.934 [2024-07-26 10:20:22.374426] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.374431] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab4e0) on tqpair=0x18746c0 00:15:08.934 [2024-07-26 10:20:22.374438] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:15:08.934 [2024-07-26 10:20:22.374445] 
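Editor's note: the SET FEATURES ASYNC EVENT CONFIGURATION (cdw10:0000000b, FID 0x0b) and GET FEATURES KEEP ALIVE TIMER (cdw10:0000000f, FID 0x0f) capsules traced just above can be inspected from an initiator with nvme-cli as well; a hedged sketch, assuming a kernel-attached controller at /dev/nvme0 (the device path is an assumption, not taken from this log):
# FID 0x0b: Asynchronous Event Configuration (matches cdw10:0000000b above).
nvme get-feature /dev/nvme0 -f 0x0b -H
# FID 0x0f: Keep Alive Timer (matches cdw10:0000000f above).
nvme get-feature /dev/nvme0 -f 0x0f -H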
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:08.934 [2024-07-26 10:20:22.374456] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:15:08.934 [2024-07-26 10:20:22.374469] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:08.934 [2024-07-26 10:20:22.374479] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.374484] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.374488] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18746c0) 00:15:08.934 [2024-07-26 10:20:22.374496] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:08.934 [2024-07-26 10:20:22.374521] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab4e0, cid 4, qid 0 00:15:08.934 [2024-07-26 10:20:22.378626] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.934 [2024-07-26 10:20:22.378652] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.934 [2024-07-26 10:20:22.378658] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.378663] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab4e0) on tqpair=0x18746c0 00:15:08.934 [2024-07-26 10:20:22.378733] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:15:08.934 [2024-07-26 10:20:22.378747] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:08.934 [2024-07-26 10:20:22.378770] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.378775] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.378779] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18746c0) 00:15:08.934 [2024-07-26 10:20:22.378792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.934 [2024-07-26 10:20:22.378822] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab4e0, cid 4, qid 0 00:15:08.934 [2024-07-26 10:20:22.378903] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:08.934 [2024-07-26 10:20:22.378912] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:08.934 [2024-07-26 10:20:22.378916] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.378920] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18746c0): datao=0, datal=4096, cccid=4 00:15:08.934 [2024-07-26 10:20:22.378925] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18ab4e0) on tqpair(0x18746c0): expected_datao=0, payload_size=4096 00:15:08.934 [2024-07-26 10:20:22.378934] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.378939] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:15:08.934 [2024-07-26 10:20:22.379256] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.934 [2024-07-26 10:20:22.379264] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.934 [2024-07-26 10:20:22.379269] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.379273] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab4e0) on tqpair=0x18746c0 00:15:08.934 [2024-07-26 10:20:22.379293] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:15:08.934 [2024-07-26 10:20:22.379307] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:15:08.934 [2024-07-26 10:20:22.379320] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:15:08.934 [2024-07-26 10:20:22.379330] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.379335] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.379339] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18746c0) 00:15:08.934 [2024-07-26 10:20:22.379347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.934 [2024-07-26 10:20:22.379373] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab4e0, cid 4, qid 0 00:15:08.934 [2024-07-26 10:20:22.379769] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:08.934 [2024-07-26 10:20:22.379789] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:08.934 [2024-07-26 10:20:22.379795] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.379800] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18746c0): datao=0, datal=4096, cccid=4 00:15:08.934 [2024-07-26 10:20:22.379805] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18ab4e0) on tqpair(0x18746c0): expected_datao=0, payload_size=4096 00:15:08.934 [2024-07-26 10:20:22.379814] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.379819] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.379829] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.934 [2024-07-26 10:20:22.379836] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.934 [2024-07-26 10:20:22.379840] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.379845] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab4e0) on tqpair=0x18746c0 00:15:08.934 [2024-07-26 10:20:22.379865] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:08.934 [2024-07-26 10:20:22.379880] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:08.934 [2024-07-26 10:20:22.379891] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.379896] nvme_tcp.c: 
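Editor's note: the IDENTIFY commands in this stretch walk the active namespace ID list (CNS 02h, the earlier cdw10:00000002 capsule), fetch the Identify Namespace data for NSID 1 (CNS 00h, cdw10:00000000), and next fetch the namespace identification descriptors (CNS 03h). Equivalent queries from a kernel initiator, again assuming a hypothetical /dev/nvme0, look like:
# CNS 02h: active namespace ID list.
nvme list-ns /dev/nvme0
# CNS 00h: Identify Namespace data structure for NSID 1.
nvme id-ns /dev/nvme0 -n 1
# CNS 03h: namespace identification descriptors (NGUID / EUI64 / UUID).
nvme ns-descs /dev/nvme0 -n 1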
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.379900] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18746c0) 00:15:08.934 [2024-07-26 10:20:22.379909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.934 [2024-07-26 10:20:22.379935] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab4e0, cid 4, qid 0 00:15:08.934 [2024-07-26 10:20:22.380387] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:08.934 [2024-07-26 10:20:22.380407] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:08.934 [2024-07-26 10:20:22.380413] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.380417] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18746c0): datao=0, datal=4096, cccid=4 00:15:08.934 [2024-07-26 10:20:22.380423] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18ab4e0) on tqpair(0x18746c0): expected_datao=0, payload_size=4096 00:15:08.934 [2024-07-26 10:20:22.380432] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.380436] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:08.934 [2024-07-26 10:20:22.380447] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:08.934 [2024-07-26 10:20:22.380454] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:08.934 [2024-07-26 10:20:22.380458] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:08.935 [2024-07-26 10:20:22.380462] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab4e0) on tqpair=0x18746c0 00:15:08.935 [2024-07-26 10:20:22.380474] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:08.935 [2024-07-26 10:20:22.380485] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:15:08.935 [2024-07-26 10:20:22.380498] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:15:08.935 [2024-07-26 10:20:22.380506] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:08.935 [2024-07-26 10:20:22.380512] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:15:08.935 [2024-07-26 10:20:22.380517] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:15:08.935 [2024-07-26 10:20:22.380523] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:15:08.935 [2024-07-26 10:20:22.380528] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:15:08.935 [2024-07-26 10:20:22.380546] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.935 [2024-07-26 10:20:22.380553] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.935 [2024-07-26 10:20:22.380557] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18746c0) 00:15:08.935 [2024-07-26 10:20:22.380565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.935 [2024-07-26 10:20:22.380594] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:08.935 [2024-07-26 10:20:22.380601] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:08.935 [2024-07-26 10:20:22.380605] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18746c0) 00:15:08.935 [2024-07-26 10:20:22.380612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:09.196 [2024-07-26 10:20:22.380644] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab4e0, cid 4, qid 0 00:15:09.196 [2024-07-26 10:20:22.380654] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab640, cid 5, qid 0 00:15:09.196 [2024-07-26 10:20:22.381098] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:09.196 [2024-07-26 10:20:22.381117] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:09.196 [2024-07-26 10:20:22.381123] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.381128] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab4e0) on tqpair=0x18746c0 00:15:09.196 [2024-07-26 10:20:22.381137] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:09.196 [2024-07-26 10:20:22.381143] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:09.196 [2024-07-26 10:20:22.381148] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.381152] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab640) on tqpair=0x18746c0 00:15:09.196 [2024-07-26 10:20:22.381166] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.381172] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.381176] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18746c0) 00:15:09.196 [2024-07-26 10:20:22.381195] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.196 [2024-07-26 10:20:22.381219] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab640, cid 5, qid 0 00:15:09.196 [2024-07-26 10:20:22.381399] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:09.196 [2024-07-26 10:20:22.381408] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:09.196 [2024-07-26 10:20:22.381412] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.381416] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab640) on tqpair=0x18746c0 00:15:09.196 [2024-07-26 10:20:22.381429] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.381435] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.381440] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18746c0) 00:15:09.196 [2024-07-26 10:20:22.381448] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.196 [2024-07-26 10:20:22.381469] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab640, cid 5, qid 0 00:15:09.196 [2024-07-26 10:20:22.381894] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:09.196 [2024-07-26 10:20:22.381915] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:09.196 [2024-07-26 10:20:22.381922] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.381926] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab640) on tqpair=0x18746c0 00:15:09.196 [2024-07-26 10:20:22.381952] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.381958] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.381963] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18746c0) 00:15:09.196 [2024-07-26 10:20:22.381971] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.196 [2024-07-26 10:20:22.381999] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab640, cid 5, qid 0 00:15:09.196 [2024-07-26 10:20:22.382052] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:09.196 [2024-07-26 10:20:22.382060] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:09.196 [2024-07-26 10:20:22.382064] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.382069] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab640) on tqpair=0x18746c0 00:15:09.196 [2024-07-26 10:20:22.382086] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.382093] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.382097] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18746c0) 00:15:09.196 [2024-07-26 10:20:22.382105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.196 [2024-07-26 10:20:22.382114] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.382119] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.382123] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18746c0) 00:15:09.196 [2024-07-26 10:20:22.382130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.196 [2024-07-26 10:20:22.382138] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.382143] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.382147] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x18746c0) 00:15:09.196 [2024-07-26 10:20:22.382154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:09.196 [2024-07-26 10:20:22.382163] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.382168] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.382172] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x18746c0) 00:15:09.196 [2024-07-26 10:20:22.382188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.196 [2024-07-26 10:20:22.382213] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab640, cid 5, qid 0 00:15:09.196 [2024-07-26 10:20:22.382221] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab4e0, cid 4, qid 0 00:15:09.196 [2024-07-26 10:20:22.382227] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab7a0, cid 6, qid 0 00:15:09.196 [2024-07-26 10:20:22.382232] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab900, cid 7, qid 0 00:15:09.196 [2024-07-26 10:20:22.389667] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:09.196 [2024-07-26 10:20:22.389694] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:09.196 [2024-07-26 10:20:22.389700] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.389704] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18746c0): datao=0, datal=8192, cccid=5 00:15:09.196 [2024-07-26 10:20:22.389710] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18ab640) on tqpair(0x18746c0): expected_datao=0, payload_size=8192 00:15:09.196 [2024-07-26 10:20:22.389720] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.389725] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.389731] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:09.196 [2024-07-26 10:20:22.389738] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:09.196 [2024-07-26 10:20:22.389742] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.389746] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18746c0): datao=0, datal=512, cccid=4 00:15:09.196 [2024-07-26 10:20:22.389751] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18ab4e0) on tqpair(0x18746c0): expected_datao=0, payload_size=512 00:15:09.196 [2024-07-26 10:20:22.389760] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.389764] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.389770] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:09.196 [2024-07-26 10:20:22.389776] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:09.196 [2024-07-26 10:20:22.389781] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.389785] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18746c0): datao=0, datal=512, cccid=6 00:15:09.196 [2024-07-26 10:20:22.389790] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18ab7a0) on tqpair(0x18746c0): expected_datao=0, payload_size=512 00:15:09.196 [2024-07-26 10:20:22.389797] 
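Editor's note: these four GET LOG PAGE capsules fetch the Error Information (LID 01h), SMART / Health (02h), Firmware Slot (03h) and Commands Supported and Effects (05h) log pages that feed the controller report printed below. nvme-cli exposes the same pages; a sketch with an assumed /dev/nvme0 device path:
nvme error-log   /dev/nvme0   # LID 01h, matches cdw10:07ff0001 above
nvme smart-log   /dev/nvme0   # LID 02h, matches cdw10:007f0002
nvme fw-log      /dev/nvme0   # LID 03h, matches cdw10:007f0003
nvme effects-log /dev/nvme0   # LID 05h, matches cdw10:03ff0005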
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.389801] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.389808] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:09.196 [2024-07-26 10:20:22.389814] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:09.196 [2024-07-26 10:20:22.389818] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.389822] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18746c0): datao=0, datal=4096, cccid=7 00:15:09.196 [2024-07-26 10:20:22.389827] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18ab900) on tqpair(0x18746c0): expected_datao=0, payload_size=4096 00:15:09.196 [2024-07-26 10:20:22.389835] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.389839] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.389845] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:09.196 [2024-07-26 10:20:22.389852] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:09.196 [2024-07-26 10:20:22.389856] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:09.196 [2024-07-26 10:20:22.389860] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab640) on tqpair=0x18746c0 00:15:09.196 [2024-07-26 10:20:22.389883] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:09.196 [2024-07-26 10:20:22.389893] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:09.196 ===================================================== 00:15:09.196 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:09.196 ===================================================== 00:15:09.196 Controller Capabilities/Features 00:15:09.196 ================================ 00:15:09.196 Vendor ID: 8086 00:15:09.196 Subsystem Vendor ID: 8086 00:15:09.196 Serial Number: SPDK00000000000001 00:15:09.196 Model Number: SPDK bdev Controller 00:15:09.196 Firmware Version: 24.01.1 00:15:09.196 Recommended Arb Burst: 6 00:15:09.196 IEEE OUI Identifier: e4 d2 5c 00:15:09.196 Multi-path I/O 00:15:09.196 May have multiple subsystem ports: Yes 00:15:09.196 May have multiple controllers: Yes 00:15:09.196 Associated with SR-IOV VF: No 00:15:09.196 Max Data Transfer Size: 131072 00:15:09.196 Max Number of Namespaces: 32 00:15:09.196 Max Number of I/O Queues: 127 00:15:09.196 NVMe Specification Version (VS): 1.3 00:15:09.196 NVMe Specification Version (Identify): 1.3 00:15:09.196 Maximum Queue Entries: 128 00:15:09.196 Contiguous Queues Required: Yes 00:15:09.196 Arbitration Mechanisms Supported 00:15:09.196 Weighted Round Robin: Not Supported 00:15:09.196 Vendor Specific: Not Supported 00:15:09.196 Reset Timeout: 15000 ms 00:15:09.196 Doorbell Stride: 4 bytes 00:15:09.196 NVM Subsystem Reset: Not Supported 00:15:09.196 Command Sets Supported 00:15:09.196 NVM Command Set: Supported 00:15:09.196 Boot Partition: Not Supported 00:15:09.196 Memory Page Size Minimum: 4096 bytes 00:15:09.196 Memory Page Size Maximum: 4096 bytes 00:15:09.196 Persistent Memory Region: Not Supported 00:15:09.196 Optional Asynchronous Events Supported 00:15:09.196 Namespace Attribute Notices: Supported 00:15:09.196 Firmware Activation Notices: Not Supported 00:15:09.196 ANA Change Notices: Not Supported 00:15:09.197 PLE 
Aggregate Log Change Notices: Not Supported 00:15:09.197 LBA Status Info Alert Notices: Not Supported 00:15:09.197 EGE Aggregate Log Change Notices: Not Supported 00:15:09.197 Normal NVM Subsystem Shutdown event: Not Supported 00:15:09.197 Zone Descriptor Change Notices: Not Supported 00:15:09.197 Discovery Log Change Notices: Not Supported 00:15:09.197 Controller Attributes 00:15:09.197 128-bit Host Identifier: Supported 00:15:09.197 Non-Operational Permissive Mode: Not Supported 00:15:09.197 NVM Sets: Not Supported 00:15:09.197 Read Recovery Levels: Not Supported 00:15:09.197 Endurance Groups: Not Supported 00:15:09.197 Predictable Latency Mode: Not Supported 00:15:09.197 Traffic Based Keep ALive: Not Supported 00:15:09.197 Namespace Granularity: Not Supported 00:15:09.197 SQ Associations: Not Supported 00:15:09.197 UUID List: Not Supported 00:15:09.197 Multi-Domain Subsystem: Not Supported 00:15:09.197 Fixed Capacity Management: Not Supported 00:15:09.197 Variable Capacity Management: Not Supported 00:15:09.197 Delete Endurance Group: Not Supported 00:15:09.197 Delete NVM Set: Not Supported 00:15:09.197 Extended LBA Formats Supported: Not Supported 00:15:09.197 Flexible Data Placement Supported: Not Supported 00:15:09.197 00:15:09.197 Controller Memory Buffer Support 00:15:09.197 ================================ 00:15:09.197 Supported: No 00:15:09.197 00:15:09.197 Persistent Memory Region Support 00:15:09.197 ================================ 00:15:09.197 Supported: No 00:15:09.197 00:15:09.197 Admin Command Set Attributes 00:15:09.197 ============================ 00:15:09.197 Security Send/Receive: Not Supported 00:15:09.197 Format NVM: Not Supported 00:15:09.197 Firmware Activate/Download: Not Supported 00:15:09.197 Namespace Management: Not Supported 00:15:09.197 Device Self-Test: Not Supported 00:15:09.197 Directives: Not Supported 00:15:09.197 NVMe-MI: Not Supported 00:15:09.197 Virtualization Management: Not Supported 00:15:09.197 Doorbell Buffer Config: Not Supported 00:15:09.197 Get LBA Status Capability: Not Supported 00:15:09.197 Command & Feature Lockdown Capability: Not Supported 00:15:09.197 Abort Command Limit: 4 00:15:09.197 Async Event Request Limit: 4 00:15:09.197 Number of Firmware Slots: N/A 00:15:09.197 Firmware Slot 1 Read-Only: N/A 00:15:09.197 Firmware Activation Without Reset: [2024-07-26 10:20:22.389897] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:09.197 [2024-07-26 10:20:22.389901] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab4e0) on tqpair=0x18746c0 00:15:09.197 [2024-07-26 10:20:22.389917] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:09.197 [2024-07-26 10:20:22.389925] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:09.197 [2024-07-26 10:20:22.389929] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:09.197 [2024-07-26 10:20:22.389934] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab7a0) on tqpair=0x18746c0 00:15:09.197 [2024-07-26 10:20:22.389943] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:09.197 [2024-07-26 10:20:22.389950] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:09.197 [2024-07-26 10:20:22.389954] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:09.197 [2024-07-26 10:20:22.389958] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab900) on tqpair=0x18746c0 00:15:09.197 N/A 00:15:09.197 Multiple 
Update Detection Support: N/A 00:15:09.197 Firmware Update Granularity: No Information Provided 00:15:09.197 Per-Namespace SMART Log: No 00:15:09.197 Asymmetric Namespace Access Log Page: Not Supported 00:15:09.197 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:09.197 Command Effects Log Page: Supported 00:15:09.197 Get Log Page Extended Data: Supported 00:15:09.197 Telemetry Log Pages: Not Supported 00:15:09.197 Persistent Event Log Pages: Not Supported 00:15:09.197 Supported Log Pages Log Page: May Support 00:15:09.197 Commands Supported & Effects Log Page: Not Supported 00:15:09.197 Feature Identifiers & Effects Log Page:May Support 00:15:09.197 NVMe-MI Commands & Effects Log Page: May Support 00:15:09.197 Data Area 4 for Telemetry Log: Not Supported 00:15:09.197 Error Log Page Entries Supported: 128 00:15:09.197 Keep Alive: Supported 00:15:09.197 Keep Alive Granularity: 10000 ms 00:15:09.197 00:15:09.197 NVM Command Set Attributes 00:15:09.197 ========================== 00:15:09.197 Submission Queue Entry Size 00:15:09.197 Max: 64 00:15:09.197 Min: 64 00:15:09.197 Completion Queue Entry Size 00:15:09.197 Max: 16 00:15:09.197 Min: 16 00:15:09.197 Number of Namespaces: 32 00:15:09.197 Compare Command: Supported 00:15:09.197 Write Uncorrectable Command: Not Supported 00:15:09.197 Dataset Management Command: Supported 00:15:09.197 Write Zeroes Command: Supported 00:15:09.197 Set Features Save Field: Not Supported 00:15:09.197 Reservations: Supported 00:15:09.197 Timestamp: Not Supported 00:15:09.197 Copy: Supported 00:15:09.197 Volatile Write Cache: Present 00:15:09.197 Atomic Write Unit (Normal): 1 00:15:09.197 Atomic Write Unit (PFail): 1 00:15:09.197 Atomic Compare & Write Unit: 1 00:15:09.197 Fused Compare & Write: Supported 00:15:09.197 Scatter-Gather List 00:15:09.197 SGL Command Set: Supported 00:15:09.197 SGL Keyed: Supported 00:15:09.197 SGL Bit Bucket Descriptor: Not Supported 00:15:09.197 SGL Metadata Pointer: Not Supported 00:15:09.197 Oversized SGL: Not Supported 00:15:09.197 SGL Metadata Address: Not Supported 00:15:09.197 SGL Offset: Supported 00:15:09.197 Transport SGL Data Block: Not Supported 00:15:09.197 Replay Protected Memory Block: Not Supported 00:15:09.197 00:15:09.197 Firmware Slot Information 00:15:09.197 ========================= 00:15:09.197 Active slot: 1 00:15:09.197 Slot 1 Firmware Revision: 24.01.1 00:15:09.197 00:15:09.197 00:15:09.197 Commands Supported and Effects 00:15:09.197 ============================== 00:15:09.197 Admin Commands 00:15:09.197 -------------- 00:15:09.197 Get Log Page (02h): Supported 00:15:09.197 Identify (06h): Supported 00:15:09.197 Abort (08h): Supported 00:15:09.197 Set Features (09h): Supported 00:15:09.197 Get Features (0Ah): Supported 00:15:09.197 Asynchronous Event Request (0Ch): Supported 00:15:09.197 Keep Alive (18h): Supported 00:15:09.197 I/O Commands 00:15:09.197 ------------ 00:15:09.197 Flush (00h): Supported LBA-Change 00:15:09.197 Write (01h): Supported LBA-Change 00:15:09.197 Read (02h): Supported 00:15:09.197 Compare (05h): Supported 00:15:09.197 Write Zeroes (08h): Supported LBA-Change 00:15:09.197 Dataset Management (09h): Supported LBA-Change 00:15:09.197 Copy (19h): Supported LBA-Change 00:15:09.197 Unknown (79h): Supported LBA-Change 00:15:09.197 Unknown (7Ah): Supported 00:15:09.197 00:15:09.197 Error Log 00:15:09.197 ========= 00:15:09.197 00:15:09.197 Arbitration 00:15:09.197 =========== 00:15:09.197 Arbitration Burst: 1 00:15:09.197 00:15:09.197 Power Management 00:15:09.197 ================ 00:15:09.197 
Number of Power States: 1 00:15:09.197 Current Power State: Power State #0 00:15:09.197 Power State #0: 00:15:09.197 Max Power: 0.00 W 00:15:09.197 Non-Operational State: Operational 00:15:09.197 Entry Latency: Not Reported 00:15:09.197 Exit Latency: Not Reported 00:15:09.197 Relative Read Throughput: 0 00:15:09.197 Relative Read Latency: 0 00:15:09.197 Relative Write Throughput: 0 00:15:09.197 Relative Write Latency: 0 00:15:09.197 Idle Power: Not Reported 00:15:09.197 Active Power: Not Reported 00:15:09.197 Non-Operational Permissive Mode: Not Supported 00:15:09.197 00:15:09.197 Health Information 00:15:09.197 ================== 00:15:09.197 Critical Warnings: 00:15:09.197 Available Spare Space: OK 00:15:09.197 Temperature: OK 00:15:09.197 Device Reliability: OK 00:15:09.197 Read Only: No 00:15:09.197 Volatile Memory Backup: OK 00:15:09.197 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:09.197 Temperature Threshold: [2024-07-26 10:20:22.390083] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:09.197 [2024-07-26 10:20:22.390093] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:09.197 [2024-07-26 10:20:22.390097] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x18746c0) 00:15:09.197 [2024-07-26 10:20:22.390107] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.197 [2024-07-26 10:20:22.390139] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab900, cid 7, qid 0 00:15:09.197 [2024-07-26 10:20:22.390459] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:09.197 [2024-07-26 10:20:22.390478] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:09.197 [2024-07-26 10:20:22.390485] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:09.197 [2024-07-26 10:20:22.390489] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab900) on tqpair=0x18746c0 00:15:09.197 [2024-07-26 10:20:22.390561] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:15:09.198 [2024-07-26 10:20:22.390612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.198 [2024-07-26 10:20:22.390623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.198 [2024-07-26 10:20:22.390630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.198 [2024-07-26 10:20:22.390636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.198 [2024-07-26 10:20:22.390647] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.390653] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.390657] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18746c0) 00:15:09.198 [2024-07-26 10:20:22.390666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.198 [2024-07-26 10:20:22.390698] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab380, cid 3, qid 0 
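Editor's note: at this point the identify pass is finished and nvme_ctrlr_destruct_async begins an orderly shutdown; the outstanding Asynchronous Event Requests are completed with ABORTED - SQ DELETION and the driver walks the CC.SHN shutdown handshake traced below. On a kernel initiator the matching teardown is a plain disconnect; a sketch using the NQN from this log:
# Tear down the host-side controller; the driver performs the same
# shutdown handshake that the debug entries below trace.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1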
00:15:09.198 [2024-07-26 10:20:22.390978] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:09.198 [2024-07-26 10:20:22.390997] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:09.198 [2024-07-26 10:20:22.391003] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.391008] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab380) on tqpair=0x18746c0 00:15:09.198 [2024-07-26 10:20:22.391019] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.391024] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.391028] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18746c0) 00:15:09.198 [2024-07-26 10:20:22.391037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.198 [2024-07-26 10:20:22.391065] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab380, cid 3, qid 0 00:15:09.198 [2024-07-26 10:20:22.391308] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:09.198 [2024-07-26 10:20:22.391325] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:09.198 [2024-07-26 10:20:22.391331] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.391336] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab380) on tqpair=0x18746c0 00:15:09.198 [2024-07-26 10:20:22.391343] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:15:09.198 [2024-07-26 10:20:22.391349] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:15:09.198 [2024-07-26 10:20:22.391361] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.391367] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.391371] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18746c0) 00:15:09.198 [2024-07-26 10:20:22.391380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.198 [2024-07-26 10:20:22.391403] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab380, cid 3, qid 0 00:15:09.198 [2024-07-26 10:20:22.391638] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:09.198 [2024-07-26 10:20:22.391656] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:09.198 [2024-07-26 10:20:22.391662] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.391666] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab380) on tqpair=0x18746c0 00:15:09.198 [2024-07-26 10:20:22.391681] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.391687] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.391692] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18746c0) 00:15:09.198 [2024-07-26 10:20:22.391704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.198 [2024-07-26 
10:20:22.391728] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab380, cid 3, qid 0 00:15:09.198 [2024-07-26 10:20:22.391925] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:09.198 [2024-07-26 10:20:22.391939] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:09.198 [2024-07-26 10:20:22.391945] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.391949] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab380) on tqpair=0x18746c0 00:15:09.198 [2024-07-26 10:20:22.391964] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.391969] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.391974] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18746c0) 00:15:09.198 [2024-07-26 10:20:22.391982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.198 [2024-07-26 10:20:22.392005] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab380, cid 3, qid 0 00:15:09.198 [2024-07-26 10:20:22.392274] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:09.198 [2024-07-26 10:20:22.392291] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:09.198 [2024-07-26 10:20:22.392296] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.392301] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab380) on tqpair=0x18746c0 00:15:09.198 [2024-07-26 10:20:22.392316] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.392322] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.392326] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18746c0) 00:15:09.198 [2024-07-26 10:20:22.392334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.198 [2024-07-26 10:20:22.392357] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab380, cid 3, qid 0 00:15:09.198 [2024-07-26 10:20:22.392567] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:09.198 [2024-07-26 10:20:22.392596] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:09.198 [2024-07-26 10:20:22.392602] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.392607] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab380) on tqpair=0x18746c0 00:15:09.198 [2024-07-26 10:20:22.392622] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.392628] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.392632] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18746c0) 00:15:09.198 [2024-07-26 10:20:22.392641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.198 [2024-07-26 10:20:22.392665] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab380, cid 3, qid 0 00:15:09.198 [2024-07-26 10:20:22.392820] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:15:09.198 [2024-07-26 10:20:22.392837] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:09.198 [2024-07-26 10:20:22.392843] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.392847] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab380) on tqpair=0x18746c0 00:15:09.198 [2024-07-26 10:20:22.392861] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.392867] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.392872] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18746c0) 00:15:09.198 [2024-07-26 10:20:22.392880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.198 [2024-07-26 10:20:22.392903] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab380, cid 3, qid 0 00:15:09.198 [2024-07-26 10:20:22.393262] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:09.198 [2024-07-26 10:20:22.393277] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:09.198 [2024-07-26 10:20:22.393283] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.393288] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab380) on tqpair=0x18746c0 00:15:09.198 [2024-07-26 10:20:22.393302] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.393308] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.393312] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18746c0) 00:15:09.198 [2024-07-26 10:20:22.393320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.198 [2024-07-26 10:20:22.393354] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab380, cid 3, qid 0 00:15:09.198 [2024-07-26 10:20:22.393411] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:09.198 [2024-07-26 10:20:22.393419] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:09.198 [2024-07-26 10:20:22.393432] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.393436] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab380) on tqpair=0x18746c0 00:15:09.198 [2024-07-26 10:20:22.393449] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.393455] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.393459] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18746c0) 00:15:09.198 [2024-07-26 10:20:22.393467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.198 [2024-07-26 10:20:22.393488] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab380, cid 3, qid 0 00:15:09.198 [2024-07-26 10:20:22.396644] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:09.198 [2024-07-26 10:20:22.396667] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:09.198 [2024-07-26 10:20:22.396675] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.396680] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab380) on tqpair=0x18746c0 00:15:09.198 [2024-07-26 10:20:22.396696] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.396703] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:09.198 [2024-07-26 10:20:22.396707] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18746c0) 00:15:09.198 [2024-07-26 10:20:22.396716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.198 [2024-07-26 10:20:22.396745] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ab380, cid 3, qid 0 00:15:09.198 [2024-07-26 10:20:22.396811] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:09.198 [2024-07-26 10:20:22.396820] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:09.198 [2024-07-26 10:20:22.396825] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:09.199 [2024-07-26 10:20:22.396829] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18ab380) on tqpair=0x18746c0 00:15:09.199 [2024-07-26 10:20:22.396840] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:15:09.199 0 Kelvin (-273 Celsius) 00:15:09.199 Available Spare: 0% 00:15:09.199 Available Spare Threshold: 0% 00:15:09.199 Life Percentage Used: 0% 00:15:09.199 Data Units Read: 0 00:15:09.199 Data Units Written: 0 00:15:09.199 Host Read Commands: 0 00:15:09.199 Host Write Commands: 0 00:15:09.199 Controller Busy Time: 0 minutes 00:15:09.199 Power Cycles: 0 00:15:09.199 Power On Hours: 0 hours 00:15:09.199 Unsafe Shutdowns: 0 00:15:09.199 Unrecoverable Media Errors: 0 00:15:09.199 Lifetime Error Log Entries: 0 00:15:09.199 Warning Temperature Time: 0 minutes 00:15:09.199 Critical Temperature Time: 0 minutes 00:15:09.199 00:15:09.199 Number of Queues 00:15:09.199 ================ 00:15:09.199 Number of I/O Submission Queues: 127 00:15:09.199 Number of I/O Completion Queues: 127 00:15:09.199 00:15:09.199 Active Namespaces 00:15:09.199 ================= 00:15:09.199 Namespace ID:1 00:15:09.199 Error Recovery Timeout: Unlimited 00:15:09.199 Command Set Identifier: NVM (00h) 00:15:09.199 Deallocate: Supported 00:15:09.199 Deallocated/Unwritten Error: Not Supported 00:15:09.199 Deallocated Read Value: Unknown 00:15:09.199 Deallocate in Write Zeroes: Not Supported 00:15:09.199 Deallocated Guard Field: 0xFFFF 00:15:09.199 Flush: Supported 00:15:09.199 Reservation: Supported 00:15:09.199 Namespace Sharing Capabilities: Multiple Controllers 00:15:09.199 Size (in LBAs): 131072 (0GiB) 00:15:09.199 Capacity (in LBAs): 131072 (0GiB) 00:15:09.199 Utilization (in LBAs): 131072 (0GiB) 00:15:09.199 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:09.199 EUI64: ABCDEF0123456789 00:15:09.199 UUID: cff1b453-60f4-4795-a2bc-85eff5abd671 00:15:09.199 Thin Provisioning: Not Supported 00:15:09.199 Per-NS Atomic Units: Yes 00:15:09.199 Atomic Boundary Size (Normal): 0 00:15:09.199 Atomic Boundary Size (PFail): 0 00:15:09.199 Atomic Boundary Offset: 0 00:15:09.199 Maximum Single Source Range Length: 65535 00:15:09.199 Maximum Copy Length: 65535 00:15:09.199 Maximum Source Range Count: 1 00:15:09.199 NGUID/EUI64 Never Reused: No 00:15:09.199 Namespace 
Write Protected: No 00:15:09.199 Number of LBA Formats: 1 00:15:09.199 Current LBA Format: LBA Format #00 00:15:09.199 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:09.199 00:15:09.199 10:20:22 -- host/identify.sh@51 -- # sync 00:15:09.199 10:20:22 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:09.199 10:20:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.199 10:20:22 -- common/autotest_common.sh@10 -- # set +x 00:15:09.199 10:20:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.199 10:20:22 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:09.199 10:20:22 -- host/identify.sh@56 -- # nvmftestfini 00:15:09.199 10:20:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:09.199 10:20:22 -- nvmf/common.sh@116 -- # sync 00:15:09.199 10:20:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:09.199 10:20:22 -- nvmf/common.sh@119 -- # set +e 00:15:09.199 10:20:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:09.199 10:20:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:09.199 rmmod nvme_tcp 00:15:09.199 rmmod nvme_fabrics 00:15:09.199 rmmod nvme_keyring 00:15:09.199 10:20:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:09.199 10:20:22 -- nvmf/common.sh@123 -- # set -e 00:15:09.199 10:20:22 -- nvmf/common.sh@124 -- # return 0 00:15:09.199 10:20:22 -- nvmf/common.sh@477 -- # '[' -n 79989 ']' 00:15:09.199 10:20:22 -- nvmf/common.sh@478 -- # killprocess 79989 00:15:09.199 10:20:22 -- common/autotest_common.sh@926 -- # '[' -z 79989 ']' 00:15:09.199 10:20:22 -- common/autotest_common.sh@930 -- # kill -0 79989 00:15:09.199 10:20:22 -- common/autotest_common.sh@931 -- # uname 00:15:09.199 10:20:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:09.199 10:20:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79989 00:15:09.199 10:20:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:09.199 10:20:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:09.199 killing process with pid 79989 00:15:09.199 10:20:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79989' 00:15:09.199 10:20:22 -- common/autotest_common.sh@945 -- # kill 79989 00:15:09.199 [2024-07-26 10:20:22.542897] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:15:09.199 10:20:22 -- common/autotest_common.sh@950 -- # wait 79989 00:15:09.458 10:20:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:09.458 10:20:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:09.458 10:20:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:09.458 10:20:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:09.458 10:20:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:09.458 10:20:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.458 10:20:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:09.458 10:20:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.458 10:20:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:09.458 00:15:09.458 real 0m2.550s 00:15:09.458 user 0m7.186s 00:15:09.458 sys 0m0.673s 00:15:09.458 10:20:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:09.458 ************************************ 00:15:09.458 END TEST nvmf_identify 00:15:09.458 
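Editor's note: the teardown traced above deletes the subsystem over JSON-RPC, unloads the initiator modules and kills the nvmf_tgt process (pid 79989 in this run). Outside the harness the same cleanup is roughly the sketch below; the pid variable is a placeholder and rpc.py is assumed to point at the running target's RPC socket:
# Delete the subsystem from the running SPDK target (the same RPC the test used).
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
# Unload the initiator-side kernel modules and stop the target process.
modprobe -r nvme-tcp nvme-fabrics
kill "$NVMF_TGT_PID"   # placeholder for the target pid (79989 in this run)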
************************************ 00:15:09.458 10:20:22 -- common/autotest_common.sh@10 -- # set +x 00:15:09.458 10:20:22 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:09.458 10:20:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:09.458 10:20:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:09.458 10:20:22 -- common/autotest_common.sh@10 -- # set +x 00:15:09.718 ************************************ 00:15:09.718 START TEST nvmf_perf 00:15:09.718 ************************************ 00:15:09.718 10:20:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:09.718 * Looking for test storage... 00:15:09.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:09.718 10:20:22 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:09.718 10:20:22 -- nvmf/common.sh@7 -- # uname -s 00:15:09.718 10:20:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.718 10:20:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.718 10:20:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.718 10:20:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:09.718 10:20:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.718 10:20:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.718 10:20:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.718 10:20:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.718 10:20:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.718 10:20:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.718 10:20:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:15:09.718 10:20:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:15:09.718 10:20:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.718 10:20:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.718 10:20:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:09.718 10:20:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:09.718 10:20:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.718 10:20:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.718 10:20:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.718 10:20:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.718 10:20:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.718 10:20:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.718 10:20:22 -- paths/export.sh@5 -- # export PATH 00:15:09.718 10:20:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.718 10:20:22 -- nvmf/common.sh@46 -- # : 0 00:15:09.718 10:20:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:09.718 10:20:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:09.718 10:20:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:09.719 10:20:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.719 10:20:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.719 10:20:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:09.719 10:20:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:09.719 10:20:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:09.719 10:20:23 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:09.719 10:20:23 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:09.719 10:20:23 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.719 10:20:23 -- host/perf.sh@17 -- # nvmftestinit 00:15:09.719 10:20:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:09.719 10:20:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:09.719 10:20:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:09.719 10:20:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:09.719 10:20:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:09.719 10:20:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.719 10:20:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:09.719 10:20:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.719 10:20:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:09.719 10:20:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:09.719 10:20:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:09.719 10:20:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 
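The nvmf_veth_init step that runs next builds the isolated test network the rest of this test depends on: an initiator-side veth on the host, a target-side veth inside a dedicated network namespace, and a bridge joining the two. A condensed sketch of that setup, using the same interface names and addresses as the commands below (trimmed to one target interface; the harness also adds nvmf_tgt_if2 at 10.0.0.3), looks like this:

   # namespace for the target plus veth pairs for the initiator and target sides
   ip netns add nvmf_tgt_ns_spdk
   ip link add nvmf_init_if type veth peer name nvmf_init_br
   ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
   ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

   # initiator at 10.0.0.1 on the host, target at 10.0.0.2 inside the namespace
   ip addr add 10.0.0.1/24 dev nvmf_init_if
   ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

   # bring the links up and bridge the two host-side peer ends together
   ip link set nvmf_init_if up
   ip link set nvmf_init_br up
   ip link set nvmf_tgt_br up
   ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
   ip link add nvmf_br type bridge
   ip link set nvmf_br up
   ip link set nvmf_init_br master nvmf_br
   ip link set nvmf_tgt_br master nvmf_br

   # open the NVMe/TCP port and verify reachability before starting the target
   iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
   iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
   ping -c 1 10.0.0.2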
00:15:09.719 10:20:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:09.719 10:20:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:09.719 10:20:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:09.719 10:20:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:09.719 10:20:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:09.719 10:20:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:09.719 10:20:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:09.719 10:20:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:09.719 10:20:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:09.719 10:20:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:09.719 10:20:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:09.719 10:20:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:09.719 10:20:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:09.719 10:20:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:09.719 10:20:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:09.719 10:20:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:09.719 Cannot find device "nvmf_tgt_br" 00:15:09.719 10:20:23 -- nvmf/common.sh@154 -- # true 00:15:09.719 10:20:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:09.719 Cannot find device "nvmf_tgt_br2" 00:15:09.719 10:20:23 -- nvmf/common.sh@155 -- # true 00:15:09.719 10:20:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:09.719 10:20:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:09.719 Cannot find device "nvmf_tgt_br" 00:15:09.719 10:20:23 -- nvmf/common.sh@157 -- # true 00:15:09.719 10:20:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:09.719 Cannot find device "nvmf_tgt_br2" 00:15:09.719 10:20:23 -- nvmf/common.sh@158 -- # true 00:15:09.719 10:20:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:09.719 10:20:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:09.719 10:20:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:09.719 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:09.719 10:20:23 -- nvmf/common.sh@161 -- # true 00:15:09.719 10:20:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:09.719 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:09.719 10:20:23 -- nvmf/common.sh@162 -- # true 00:15:09.719 10:20:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:09.719 10:20:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:09.991 10:20:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:09.991 10:20:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:09.991 10:20:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:09.991 10:20:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:09.991 10:20:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:09.991 10:20:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:09.991 10:20:23 -- nvmf/common.sh@179 -- # ip netns exec 
nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:09.991 10:20:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:09.991 10:20:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:09.991 10:20:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:09.991 10:20:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:09.991 10:20:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:09.991 10:20:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:09.991 10:20:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:09.991 10:20:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:09.991 10:20:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:09.991 10:20:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:09.991 10:20:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:09.991 10:20:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:09.991 10:20:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:09.991 10:20:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:09.991 10:20:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:09.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:09.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:15:09.991 00:15:09.991 --- 10.0.0.2 ping statistics --- 00:15:09.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.991 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:15:09.991 10:20:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:09.991 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:09.991 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:15:09.991 00:15:09.991 --- 10.0.0.3 ping statistics --- 00:15:09.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.991 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:09.991 10:20:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:09.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:09.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:09.991 00:15:09.991 --- 10.0.0.1 ping statistics --- 00:15:09.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.991 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:09.991 10:20:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:09.991 10:20:23 -- nvmf/common.sh@421 -- # return 0 00:15:09.991 10:20:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:09.991 10:20:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:09.991 10:20:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:09.991 10:20:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:09.991 10:20:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:09.991 10:20:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:09.991 10:20:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:09.991 10:20:23 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:09.991 10:20:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:09.991 10:20:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:09.991 10:20:23 -- common/autotest_common.sh@10 -- # set +x 00:15:09.991 10:20:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:09.991 10:20:23 -- nvmf/common.sh@469 -- # nvmfpid=80189 00:15:09.991 10:20:23 -- nvmf/common.sh@470 -- # waitforlisten 80189 00:15:09.991 10:20:23 -- common/autotest_common.sh@819 -- # '[' -z 80189 ']' 00:15:09.991 10:20:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.991 10:20:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:09.991 10:20:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.991 10:20:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:09.991 10:20:23 -- common/autotest_common.sh@10 -- # set +x 00:15:09.991 [2024-07-26 10:20:23.439841] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:15:09.991 [2024-07-26 10:20:23.439936] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.256 [2024-07-26 10:20:23.580456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:10.256 [2024-07-26 10:20:23.701054] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:10.256 [2024-07-26 10:20:23.701237] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.256 [2024-07-26 10:20:23.701254] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.256 [2024-07-26 10:20:23.701266] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
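With connectivity verified and the nvme-tcp module loaded, perf.sh starts the NVMe-oF target inside that namespace and provisions it over JSON-RPC. Condensed from the commands in this run (rpc.py talks to the default /var/tmp/spdk.sock socket, and the malloc bdev it creates shows up as Malloc0), the sequence is roughly:

   rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

   # target application on 4 cores (-m 0xF) with all tracepoint groups enabled (-e 0xFFFF)
   ip netns exec nvmf_tgt_ns_spdk \
       /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
   # the harness blocks on its waitforlisten helper until the RPC socket answers

   # TCP transport, one subsystem carrying the malloc namespace and the local NVMe
   # drive (Nvme0n1, attached from PCIe 0000:00:06.0), listening on 10.0.0.2:4420
   $rpc nvmf_create_transport -t tcp -o
   $rpc bdev_malloc_create 64 512
   $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
   $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
   $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
   $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420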
00:15:10.256 [2024-07-26 10:20:23.701453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.256 [2024-07-26 10:20:23.702107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.256 [2024-07-26 10:20:23.702322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:10.256 [2024-07-26 10:20:23.702332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.192 10:20:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:11.192 10:20:24 -- common/autotest_common.sh@852 -- # return 0 00:15:11.192 10:20:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:11.192 10:20:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:11.192 10:20:24 -- common/autotest_common.sh@10 -- # set +x 00:15:11.192 10:20:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.192 10:20:24 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:11.192 10:20:24 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:11.451 10:20:24 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:11.451 10:20:24 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:11.710 10:20:25 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:15:11.710 10:20:25 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:12.279 10:20:25 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:12.279 10:20:25 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:15:12.279 10:20:25 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:12.279 10:20:25 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:12.279 10:20:25 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:12.279 [2024-07-26 10:20:25.717959] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.537 10:20:25 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:12.537 10:20:25 -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:12.537 10:20:25 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:13.105 10:20:26 -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:13.105 10:20:26 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:13.105 10:20:26 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:13.364 [2024-07-26 10:20:26.781499] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:13.364 10:20:26 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:13.623 10:20:27 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:15:13.623 10:20:27 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:15:13.623 10:20:27 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:13.623 10:20:27 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:15:15.001 Initializing NVMe 
Controllers 00:15:15.001 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:15:15.001 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:15:15.001 Initialization complete. Launching workers. 00:15:15.001 ======================================================== 00:15:15.001 Latency(us) 00:15:15.001 Device Information : IOPS MiB/s Average min max 00:15:15.001 PCIE (0000:00:06.0) NSID 1 from core 0: 21387.48 83.54 1495.17 268.17 8325.20 00:15:15.001 ======================================================== 00:15:15.001 Total : 21387.48 83.54 1495.17 268.17 8325.20 00:15:15.001 00:15:15.001 10:20:28 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:16.379 Initializing NVMe Controllers 00:15:16.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:16.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:16.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:16.379 Initialization complete. Launching workers. 00:15:16.379 ======================================================== 00:15:16.379 Latency(us) 00:15:16.379 Device Information : IOPS MiB/s Average min max 00:15:16.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2785.32 10.88 358.74 113.34 7351.58 00:15:16.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.88 0.49 8094.59 5933.20 12007.55 00:15:16.379 ======================================================== 00:15:16.379 Total : 2910.20 11.37 690.70 113.34 12007.55 00:15:16.379 00:15:16.379 10:20:29 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:17.754 Initializing NVMe Controllers 00:15:17.754 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:17.754 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:17.754 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:17.754 Initialization complete. Launching workers. 00:15:17.754 ======================================================== 00:15:17.754 Latency(us) 00:15:17.754 Device Information : IOPS MiB/s Average min max 00:15:17.754 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8161.68 31.88 3920.71 531.06 8145.76 00:15:17.754 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3732.71 14.58 8635.84 5972.24 16185.61 00:15:17.754 ======================================================== 00:15:17.754 Total : 11894.38 46.46 5400.42 531.06 16185.61 00:15:17.754 00:15:17.754 10:20:30 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:17.754 10:20:30 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:20.287 Initializing NVMe Controllers 00:15:20.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:20.287 Controller IO queue size 128, less than required. 00:15:20.287 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:20.287 Controller IO queue size 128, less than required. 
00:15:20.287 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:20.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:20.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:20.287 Initialization complete. Launching workers. 00:15:20.287 ======================================================== 00:15:20.287 Latency(us) 00:15:20.287 Device Information : IOPS MiB/s Average min max 00:15:20.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1521.22 380.30 85866.93 67096.53 136041.22 00:15:20.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 639.04 159.76 210086.23 100653.08 340797.80 00:15:20.287 ======================================================== 00:15:20.287 Total : 2160.26 540.06 122613.11 67096.53 340797.80 00:15:20.287 00:15:20.287 10:20:33 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:15:20.287 No valid NVMe controllers or AIO or URING devices found 00:15:20.287 Initializing NVMe Controllers 00:15:20.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:20.287 Controller IO queue size 128, less than required. 00:15:20.287 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:20.287 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:20.287 Controller IO queue size 128, less than required. 00:15:20.287 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:20.287 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:20.287 WARNING: Some requested NVMe devices were skipped 00:15:20.287 10:20:33 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:15:22.820 Initializing NVMe Controllers 00:15:22.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:22.820 Controller IO queue size 128, less than required. 00:15:22.820 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:22.820 Controller IO queue size 128, less than required. 00:15:22.820 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:22.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:22.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:22.820 Initialization complete. Launching workers. 
00:15:22.820 00:15:22.820 ==================== 00:15:22.820 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:22.820 TCP transport: 00:15:22.820 polls: 6405 00:15:22.820 idle_polls: 0 00:15:22.820 sock_completions: 6405 00:15:22.820 nvme_completions: 5161 00:15:22.820 submitted_requests: 7877 00:15:22.820 queued_requests: 1 00:15:22.820 00:15:22.820 ==================== 00:15:22.820 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:22.820 TCP transport: 00:15:22.820 polls: 6564 00:15:22.820 idle_polls: 0 00:15:22.820 sock_completions: 6564 00:15:22.820 nvme_completions: 5779 00:15:22.820 submitted_requests: 8835 00:15:22.820 queued_requests: 1 00:15:22.820 ======================================================== 00:15:22.820 Latency(us) 00:15:22.820 Device Information : IOPS MiB/s Average min max 00:15:22.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1351.26 337.82 96176.33 45319.49 169308.11 00:15:22.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1504.95 376.24 85457.52 43778.96 137658.39 00:15:22.820 ======================================================== 00:15:22.820 Total : 2856.21 714.05 90528.55 43778.96 169308.11 00:15:22.820 00:15:22.820 10:20:36 -- host/perf.sh@66 -- # sync 00:15:23.079 10:20:36 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:23.338 10:20:36 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:15:23.338 10:20:36 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:15:23.338 10:20:36 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:15:23.597 10:20:36 -- host/perf.sh@72 -- # ls_guid=b9f72e91-5f01-4c69-b514-3a84518734e2 00:15:23.597 10:20:36 -- host/perf.sh@73 -- # get_lvs_free_mb b9f72e91-5f01-4c69-b514-3a84518734e2 00:15:23.597 10:20:36 -- common/autotest_common.sh@1343 -- # local lvs_uuid=b9f72e91-5f01-4c69-b514-3a84518734e2 00:15:23.597 10:20:36 -- common/autotest_common.sh@1344 -- # local lvs_info 00:15:23.597 10:20:36 -- common/autotest_common.sh@1345 -- # local fc 00:15:23.597 10:20:36 -- common/autotest_common.sh@1346 -- # local cs 00:15:23.597 10:20:36 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:23.855 10:20:37 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:15:23.855 { 00:15:23.855 "uuid": "b9f72e91-5f01-4c69-b514-3a84518734e2", 00:15:23.855 "name": "lvs_0", 00:15:23.855 "base_bdev": "Nvme0n1", 00:15:23.855 "total_data_clusters": 1278, 00:15:23.855 "free_clusters": 1278, 00:15:23.855 "block_size": 4096, 00:15:23.855 "cluster_size": 4194304 00:15:23.855 } 00:15:23.855 ]' 00:15:23.855 10:20:37 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="b9f72e91-5f01-4c69-b514-3a84518734e2") .free_clusters' 00:15:23.855 10:20:37 -- common/autotest_common.sh@1348 -- # fc=1278 00:15:23.855 10:20:37 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="b9f72e91-5f01-4c69-b514-3a84518734e2") .cluster_size' 00:15:23.855 5112 00:15:23.855 10:20:37 -- common/autotest_common.sh@1349 -- # cs=4194304 00:15:23.855 10:20:37 -- common/autotest_common.sh@1352 -- # free_mb=5112 00:15:23.855 10:20:37 -- common/autotest_common.sh@1353 -- # echo 5112 00:15:23.855 10:20:37 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:15:23.855 10:20:37 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
b9f72e91-5f01-4c69-b514-3a84518734e2 lbd_0 5112 00:15:24.114 10:20:37 -- host/perf.sh@80 -- # lb_guid=036f6df1-b59d-48af-9180-d89117e52bf4 00:15:24.114 10:20:37 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 036f6df1-b59d-48af-9180-d89117e52bf4 lvs_n_0 00:15:24.373 10:20:37 -- host/perf.sh@83 -- # ls_nested_guid=f2902088-d018-42be-a48f-01a7594fde8d 00:15:24.373 10:20:37 -- host/perf.sh@84 -- # get_lvs_free_mb f2902088-d018-42be-a48f-01a7594fde8d 00:15:24.373 10:20:37 -- common/autotest_common.sh@1343 -- # local lvs_uuid=f2902088-d018-42be-a48f-01a7594fde8d 00:15:24.373 10:20:37 -- common/autotest_common.sh@1344 -- # local lvs_info 00:15:24.373 10:20:37 -- common/autotest_common.sh@1345 -- # local fc 00:15:24.373 10:20:37 -- common/autotest_common.sh@1346 -- # local cs 00:15:24.373 10:20:37 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:24.632 10:20:38 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:15:24.632 { 00:15:24.632 "uuid": "b9f72e91-5f01-4c69-b514-3a84518734e2", 00:15:24.632 "name": "lvs_0", 00:15:24.632 "base_bdev": "Nvme0n1", 00:15:24.632 "total_data_clusters": 1278, 00:15:24.632 "free_clusters": 0, 00:15:24.632 "block_size": 4096, 00:15:24.632 "cluster_size": 4194304 00:15:24.632 }, 00:15:24.632 { 00:15:24.632 "uuid": "f2902088-d018-42be-a48f-01a7594fde8d", 00:15:24.632 "name": "lvs_n_0", 00:15:24.632 "base_bdev": "036f6df1-b59d-48af-9180-d89117e52bf4", 00:15:24.632 "total_data_clusters": 1276, 00:15:24.632 "free_clusters": 1276, 00:15:24.632 "block_size": 4096, 00:15:24.632 "cluster_size": 4194304 00:15:24.632 } 00:15:24.632 ]' 00:15:24.632 10:20:38 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="f2902088-d018-42be-a48f-01a7594fde8d") .free_clusters' 00:15:24.891 10:20:38 -- common/autotest_common.sh@1348 -- # fc=1276 00:15:24.891 10:20:38 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="f2902088-d018-42be-a48f-01a7594fde8d") .cluster_size' 00:15:24.891 5104 00:15:24.891 10:20:38 -- common/autotest_common.sh@1349 -- # cs=4194304 00:15:24.891 10:20:38 -- common/autotest_common.sh@1352 -- # free_mb=5104 00:15:24.891 10:20:38 -- common/autotest_common.sh@1353 -- # echo 5104 00:15:24.891 10:20:38 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:15:24.891 10:20:38 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f2902088-d018-42be-a48f-01a7594fde8d lbd_nest_0 5104 00:15:25.150 10:20:38 -- host/perf.sh@88 -- # lb_nested_guid=49448428-45f0-493a-b068-f8c373f0fd4b 00:15:25.150 10:20:38 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:25.408 10:20:38 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:15:25.408 10:20:38 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 49448428-45f0-493a-b068-f8c373f0fd4b 00:15:25.667 10:20:38 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:25.926 10:20:39 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:15:25.926 10:20:39 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:15:25.926 10:20:39 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:25.926 10:20:39 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:25.926 10:20:39 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:26.185 No valid NVMe controllers or AIO or URING devices found 00:15:26.185 Initializing NVMe Controllers 00:15:26.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:26.185 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:26.185 WARNING: Some requested NVMe devices were skipped 00:15:26.185 10:20:39 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:26.185 10:20:39 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:38.410 Initializing NVMe Controllers 00:15:38.410 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:38.410 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:38.410 Initialization complete. Launching workers. 00:15:38.410 ======================================================== 00:15:38.410 Latency(us) 00:15:38.410 Device Information : IOPS MiB/s Average min max 00:15:38.410 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 816.93 102.12 1223.39 350.56 7836.70 00:15:38.410 ======================================================== 00:15:38.410 Total : 816.93 102.12 1223.39 350.56 7836.70 00:15:38.410 00:15:38.410 10:20:49 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:38.410 10:20:49 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:38.410 10:20:49 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:38.410 No valid NVMe controllers or AIO or URING devices found 00:15:38.410 Initializing NVMe Controllers 00:15:38.410 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:38.410 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:38.410 WARNING: Some requested NVMe devices were skipped 00:15:38.410 10:20:50 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:38.410 10:20:50 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:48.382 Initializing NVMe Controllers 00:15:48.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:48.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:48.382 Initialization complete. Launching workers. 
00:15:48.382 ======================================================== 00:15:48.382 Latency(us) 00:15:48.382 Device Information : IOPS MiB/s Average min max 00:15:48.382 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1327.80 165.97 24119.27 7113.62 64018.71 00:15:48.382 ======================================================== 00:15:48.382 Total : 1327.80 165.97 24119.27 7113.62 64018.71 00:15:48.382 00:15:48.382 10:21:00 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:48.382 10:21:00 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:48.382 10:21:00 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:48.382 No valid NVMe controllers or AIO or URING devices found 00:15:48.382 Initializing NVMe Controllers 00:15:48.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:48.382 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:48.382 WARNING: Some requested NVMe devices were skipped 00:15:48.382 10:21:00 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:48.382 10:21:00 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:58.357 Initializing NVMe Controllers 00:15:58.357 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:58.357 Controller IO queue size 128, less than required. 00:15:58.357 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:58.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:58.357 Initialization complete. Launching workers. 
00:15:58.357 ======================================================== 00:15:58.357 Latency(us) 00:15:58.357 Device Information : IOPS MiB/s Average min max 00:15:58.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3678.08 459.76 34876.82 7195.37 74488.77 00:15:58.357 ======================================================== 00:15:58.357 Total : 3678.08 459.76 34876.82 7195.37 74488.77 00:15:58.357 00:15:58.357 10:21:10 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:58.357 10:21:11 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 49448428-45f0-493a-b068-f8c373f0fd4b 00:15:58.357 10:21:11 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:15:58.616 10:21:11 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 036f6df1-b59d-48af-9180-d89117e52bf4 00:15:58.874 10:21:12 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:15:59.133 10:21:12 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:59.133 10:21:12 -- host/perf.sh@114 -- # nvmftestfini 00:15:59.133 10:21:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:59.133 10:21:12 -- nvmf/common.sh@116 -- # sync 00:15:59.133 10:21:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:59.133 10:21:12 -- nvmf/common.sh@119 -- # set +e 00:15:59.133 10:21:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:59.133 10:21:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:59.133 rmmod nvme_tcp 00:15:59.133 rmmod nvme_fabrics 00:15:59.133 rmmod nvme_keyring 00:15:59.133 10:21:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:59.133 10:21:12 -- nvmf/common.sh@123 -- # set -e 00:15:59.133 10:21:12 -- nvmf/common.sh@124 -- # return 0 00:15:59.133 10:21:12 -- nvmf/common.sh@477 -- # '[' -n 80189 ']' 00:15:59.133 10:21:12 -- nvmf/common.sh@478 -- # killprocess 80189 00:15:59.133 10:21:12 -- common/autotest_common.sh@926 -- # '[' -z 80189 ']' 00:15:59.133 10:21:12 -- common/autotest_common.sh@930 -- # kill -0 80189 00:15:59.133 10:21:12 -- common/autotest_common.sh@931 -- # uname 00:15:59.133 10:21:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:59.133 10:21:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80189 00:15:59.133 killing process with pid 80189 00:15:59.133 10:21:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:59.133 10:21:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:59.133 10:21:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80189' 00:15:59.133 10:21:12 -- common/autotest_common.sh@945 -- # kill 80189 00:15:59.133 10:21:12 -- common/autotest_common.sh@950 -- # wait 80189 00:16:00.519 10:21:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:00.519 10:21:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:00.519 10:21:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:00.519 10:21:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:00.519 10:21:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:00.519 10:21:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.519 10:21:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:00.519 10:21:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.519 10:21:13 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:16:00.519 00:16:00.519 real 0m50.910s 00:16:00.519 user 3m11.988s 00:16:00.519 sys 0m12.881s 00:16:00.519 10:21:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:00.519 10:21:13 -- common/autotest_common.sh@10 -- # set +x 00:16:00.519 ************************************ 00:16:00.519 END TEST nvmf_perf 00:16:00.519 ************************************ 00:16:00.519 10:21:13 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:00.519 10:21:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:00.519 10:21:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:00.519 10:21:13 -- common/autotest_common.sh@10 -- # set +x 00:16:00.519 ************************************ 00:16:00.519 START TEST nvmf_fio_host 00:16:00.519 ************************************ 00:16:00.520 10:21:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:00.520 * Looking for test storage... 00:16:00.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:00.520 10:21:13 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:00.520 10:21:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.520 10:21:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.520 10:21:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.520 10:21:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.520 10:21:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.520 10:21:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.520 10:21:13 -- paths/export.sh@5 -- # export PATH 00:16:00.520 10:21:13 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.520 10:21:13 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:00.520 10:21:13 -- nvmf/common.sh@7 -- # uname -s 00:16:00.520 10:21:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:00.520 10:21:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.520 10:21:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.520 10:21:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.520 10:21:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.520 10:21:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.520 10:21:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.520 10:21:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.520 10:21:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.780 10:21:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.780 10:21:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:16:00.780 10:21:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:16:00.780 10:21:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.780 10:21:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.780 10:21:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:00.780 10:21:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:00.780 10:21:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.780 10:21:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.780 10:21:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.780 10:21:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.780 10:21:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.780 10:21:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.780 10:21:13 -- paths/export.sh@5 -- # export PATH 00:16:00.780 10:21:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.780 10:21:13 -- nvmf/common.sh@46 -- # : 0 00:16:00.780 10:21:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:00.780 10:21:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:00.780 10:21:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:00.780 10:21:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.780 10:21:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.780 10:21:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:00.780 10:21:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:00.780 10:21:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:00.780 10:21:13 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:00.780 10:21:13 -- host/fio.sh@14 -- # nvmftestinit 00:16:00.780 10:21:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:00.780 10:21:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:00.780 10:21:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:00.780 10:21:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:00.780 10:21:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:00.780 10:21:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.780 10:21:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:00.780 10:21:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.780 10:21:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:00.780 10:21:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:00.780 10:21:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:00.780 10:21:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:00.780 10:21:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:00.780 10:21:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:00.780 10:21:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:00.780 10:21:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:00.780 10:21:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:00.780 10:21:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:00.780 10:21:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:00.780 10:21:13 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:00.780 10:21:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:00.780 10:21:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:00.780 10:21:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:00.780 10:21:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:00.780 10:21:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:00.780 10:21:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:00.780 10:21:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:00.780 10:21:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:00.780 Cannot find device "nvmf_tgt_br" 00:16:00.780 10:21:14 -- nvmf/common.sh@154 -- # true 00:16:00.780 10:21:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:00.780 Cannot find device "nvmf_tgt_br2" 00:16:00.781 10:21:14 -- nvmf/common.sh@155 -- # true 00:16:00.781 10:21:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:00.781 10:21:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:00.781 Cannot find device "nvmf_tgt_br" 00:16:00.781 10:21:14 -- nvmf/common.sh@157 -- # true 00:16:00.781 10:21:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:00.781 Cannot find device "nvmf_tgt_br2" 00:16:00.781 10:21:14 -- nvmf/common.sh@158 -- # true 00:16:00.781 10:21:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:00.781 10:21:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:00.781 10:21:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:00.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:00.781 10:21:14 -- nvmf/common.sh@161 -- # true 00:16:00.781 10:21:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:00.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:00.781 10:21:14 -- nvmf/common.sh@162 -- # true 00:16:00.781 10:21:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:00.781 10:21:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:00.781 10:21:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:00.781 10:21:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:00.781 10:21:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:00.781 10:21:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:00.781 10:21:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:00.781 10:21:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:00.781 10:21:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:00.781 10:21:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:00.781 10:21:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:00.781 10:21:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:00.781 10:21:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:00.781 10:21:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:00.781 10:21:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:16:00.781 10:21:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:01.040 10:21:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:01.040 10:21:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:01.040 10:21:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:01.040 10:21:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:01.040 10:21:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:01.040 10:21:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:01.040 10:21:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:01.040 10:21:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:01.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:01.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:16:01.040 00:16:01.040 --- 10.0.0.2 ping statistics --- 00:16:01.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.040 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:01.040 10:21:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:01.040 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:01.040 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:16:01.040 00:16:01.040 --- 10.0.0.3 ping statistics --- 00:16:01.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.040 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:01.040 10:21:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:01.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:01.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:01.040 00:16:01.040 --- 10.0.0.1 ping statistics --- 00:16:01.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.040 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:01.040 10:21:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.040 10:21:14 -- nvmf/common.sh@421 -- # return 0 00:16:01.040 10:21:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:01.040 10:21:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.040 10:21:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:01.040 10:21:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:01.040 10:21:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.040 10:21:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:01.040 10:21:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:01.040 10:21:14 -- host/fio.sh@16 -- # [[ y != y ]] 00:16:01.040 10:21:14 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:16:01.040 10:21:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:01.040 10:21:14 -- common/autotest_common.sh@10 -- # set +x 00:16:01.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
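The 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' message comes from the harness's waitforlisten helper, which blocks until the freshly started nvmf_tgt (pid 81017 below) answers over RPC. The real helper lives in autotest_common.sh; a hypothetical stand-in that captures the idea would be:

   # hypothetical stand-in, not the actual helper: retry a cheap RPC
   # until the target's UNIX domain socket accepts connections
   for _ in $(seq 1 100); do
       if /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; then
           break
       fi
       sleep 0.1
   done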
00:16:01.040 10:21:14 -- host/fio.sh@24 -- # nvmfpid=81017 00:16:01.040 10:21:14 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:01.040 10:21:14 -- host/fio.sh@28 -- # waitforlisten 81017 00:16:01.040 10:21:14 -- common/autotest_common.sh@819 -- # '[' -z 81017 ']' 00:16:01.040 10:21:14 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:01.040 10:21:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.040 10:21:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:01.040 10:21:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.040 10:21:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:01.040 10:21:14 -- common/autotest_common.sh@10 -- # set +x 00:16:01.040 [2024-07-26 10:21:14.424993] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:16:01.040 [2024-07-26 10:21:14.425147] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.299 [2024-07-26 10:21:14.570097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:01.299 [2024-07-26 10:21:14.672677] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:01.299 [2024-07-26 10:21:14.673127] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.299 [2024-07-26 10:21:14.673194] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:01.299 [2024-07-26 10:21:14.673413] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
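host/fio.sh then starts the target inside the namespace and blocks in waitforlisten until the application's RPC socket is usable; the RPC calls logged next create the TCP transport, a malloc bdev and the test subsystem. A hedged sketch of that sequence, with paths, NQN and sizes copied from the log — the polling loop is only a stand-in for waitforlisten, which additionally checks that the pid is still alive:

    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done      # crude stand-in for waitforlisten

    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1     # 64 MB malloc bdev, 512-byte blocks
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

rpc.py talks to /var/tmp/spdk.sock by default, so no -s option is needed even though the target runs inside the network namespace: UNIX-domain sockets are addressed through the filesystem, which network namespaces do not isolate.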
00:16:01.299 [2024-07-26 10:21:14.673610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.299 [2024-07-26 10:21:14.675035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.299 [2024-07-26 10:21:14.675236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:01.299 [2024-07-26 10:21:14.675242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.235 10:21:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:02.235 10:21:15 -- common/autotest_common.sh@852 -- # return 0 00:16:02.235 10:21:15 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:02.235 [2024-07-26 10:21:15.648247] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:02.235 10:21:15 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:16:02.235 10:21:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:02.235 10:21:15 -- common/autotest_common.sh@10 -- # set +x 00:16:02.494 10:21:15 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:02.753 Malloc1 00:16:02.753 10:21:16 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:03.011 10:21:16 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:03.270 10:21:16 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:03.528 [2024-07-26 10:21:16.813763] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:03.528 10:21:16 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:03.787 10:21:17 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:16:03.787 10:21:17 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:03.787 10:21:17 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:03.787 10:21:17 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:16:03.787 10:21:17 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:03.787 10:21:17 -- common/autotest_common.sh@1318 -- # local sanitizers 00:16:03.787 10:21:17 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:03.787 10:21:17 -- common/autotest_common.sh@1320 -- # shift 00:16:03.787 10:21:17 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:16:03.787 10:21:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:16:03.787 10:21:17 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:03.787 10:21:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:16:03.787 10:21:17 -- common/autotest_common.sh@1324 -- # grep libasan 00:16:03.787 10:21:17 -- common/autotest_common.sh@1324 -- # asan_lib= 00:16:03.787 10:21:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:16:03.787 10:21:17 -- 
common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:16:03.787 10:21:17 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:03.787 10:21:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:16:03.787 10:21:17 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:16:03.787 10:21:17 -- common/autotest_common.sh@1324 -- # asan_lib= 00:16:03.787 10:21:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:16:03.787 10:21:17 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:03.787 10:21:17 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:04.045 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:04.045 fio-3.35 00:16:04.045 Starting 1 thread 00:16:06.577 00:16:06.577 test: (groupid=0, jobs=1): err= 0: pid=81099: Fri Jul 26 10:21:19 2024 00:16:06.577 read: IOPS=7726, BW=30.2MiB/s (31.6MB/s)(60.6MiB/2007msec) 00:16:06.577 slat (usec): min=2, max=381, avg= 2.71, stdev= 4.08 00:16:06.577 clat (usec): min=2630, max=14719, avg=8645.21, stdev=879.57 00:16:06.577 lat (usec): min=2672, max=14722, avg=8647.92, stdev=879.34 00:16:06.577 clat percentiles (usec): 00:16:06.577 | 1.00th=[ 6849], 5.00th=[ 7373], 10.00th=[ 7635], 20.00th=[ 8029], 00:16:06.577 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8848], 00:16:06.577 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9634], 95.00th=[ 9896], 00:16:06.577 | 99.00th=[11076], 99.50th=[12911], 99.90th=[14091], 99.95th=[14353], 00:16:06.577 | 99.99th=[14746] 00:16:06.577 bw ( KiB/s): min=29296, max=32592, per=99.92%, avg=30882.00, stdev=1350.37, samples=4 00:16:06.577 iops : min= 7324, max= 8148, avg=7720.50, stdev=337.59, samples=4 00:16:06.577 write: IOPS=7715, BW=30.1MiB/s (31.6MB/s)(60.5MiB/2007msec); 0 zone resets 00:16:06.577 slat (usec): min=2, max=260, avg= 2.80, stdev= 2.95 00:16:06.577 clat (usec): min=2495, max=13845, avg=7870.91, stdev=818.00 00:16:06.577 lat (usec): min=2509, max=13944, avg=7873.71, stdev=817.93 00:16:06.577 clat percentiles (usec): 00:16:06.577 | 1.00th=[ 6128], 5.00th=[ 6718], 10.00th=[ 6980], 20.00th=[ 7308], 00:16:06.577 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8029], 00:16:06.577 | 70.00th=[ 8160], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:16:06.577 | 99.00th=[10028], 99.50th=[11994], 99.90th=[13435], 99.95th=[13698], 00:16:06.577 | 99.99th=[13829] 00:16:06.577 bw ( KiB/s): min=29760, max=32952, per=99.95%, avg=30848.00, stdev=1430.14, samples=4 00:16:06.577 iops : min= 7440, max= 8238, avg=7712.00, stdev=357.54, samples=4 00:16:06.577 lat (msec) : 4=0.08%, 10=97.57%, 20=2.35% 00:16:06.577 cpu : usr=71.98%, sys=20.39%, ctx=116, majf=0, minf=5 00:16:06.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:06.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.578 issued rwts: total=15507,15485,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.578 00:16:06.578 Run status group 0 (all jobs): 00:16:06.578 READ: bw=30.2MiB/s (31.6MB/s), 30.2MiB/s-30.2MiB/s (31.6MB/s-31.6MB/s), io=60.6MiB (63.5MB), run=2007-2007msec 
00:16:06.578 WRITE: bw=30.1MiB/s (31.6MB/s), 30.1MiB/s-30.1MiB/s (31.6MB/s-31.6MB/s), io=60.5MiB (63.4MB), run=2007-2007msec 00:16:06.578 10:21:19 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:06.578 10:21:19 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:06.578 10:21:19 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:16:06.578 10:21:19 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:06.578 10:21:19 -- common/autotest_common.sh@1318 -- # local sanitizers 00:16:06.578 10:21:19 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:06.578 10:21:19 -- common/autotest_common.sh@1320 -- # shift 00:16:06.578 10:21:19 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:16:06.578 10:21:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:16:06.578 10:21:19 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:06.578 10:21:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:16:06.578 10:21:19 -- common/autotest_common.sh@1324 -- # grep libasan 00:16:06.578 10:21:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:16:06.578 10:21:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:16:06.578 10:21:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:16:06.578 10:21:19 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:06.578 10:21:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:16:06.578 10:21:19 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:16:06.578 10:21:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:16:06.578 10:21:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:16:06.578 10:21:19 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:06.578 10:21:19 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:06.578 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:16:06.578 fio-3.35 00:16:06.578 Starting 1 thread 00:16:09.110 00:16:09.110 test: (groupid=0, jobs=1): err= 0: pid=81149: Fri Jul 26 10:21:22 2024 00:16:09.110 read: IOPS=8038, BW=126MiB/s (132MB/s)(252MiB/2007msec) 00:16:09.110 slat (usec): min=2, max=163, avg= 3.93, stdev= 2.37 00:16:09.110 clat (usec): min=3003, max=20014, avg=8811.57, stdev=2739.49 00:16:09.110 lat (usec): min=3006, max=20018, avg=8815.50, stdev=2739.56 00:16:09.110 clat percentiles (usec): 00:16:09.110 | 1.00th=[ 4228], 5.00th=[ 5080], 10.00th=[ 5473], 20.00th=[ 6325], 00:16:09.110 | 30.00th=[ 7111], 40.00th=[ 7767], 50.00th=[ 8455], 60.00th=[ 9241], 00:16:09.110 | 70.00th=[10159], 80.00th=[11076], 90.00th=[12649], 95.00th=[13960], 00:16:09.110 | 99.00th=[16319], 99.50th=[16712], 99.90th=[17957], 99.95th=[18220], 00:16:09.110 | 99.99th=[18482] 00:16:09.110 bw ( KiB/s): min=57920, max=71680, per=50.19%, avg=64552.00, stdev=5935.19, samples=4 00:16:09.110 iops : min= 3620, max= 
4480, avg=4034.50, stdev=370.95, samples=4 00:16:09.110 write: IOPS=4546, BW=71.0MiB/s (74.5MB/s)(132MiB/1861msec); 0 zone resets 00:16:09.110 slat (usec): min=33, max=195, avg=39.53, stdev= 6.63 00:16:09.110 clat (usec): min=5397, max=23788, avg=12714.06, stdev=2223.80 00:16:09.110 lat (usec): min=5435, max=23823, avg=12753.59, stdev=2224.24 00:16:09.110 clat percentiles (usec): 00:16:09.110 | 1.00th=[ 8586], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10814], 00:16:09.111 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12518], 60.00th=[12911], 00:16:09.111 | 70.00th=[13566], 80.00th=[14353], 90.00th=[15664], 95.00th=[16909], 00:16:09.111 | 99.00th=[19006], 99.50th=[19792], 99.90th=[20841], 99.95th=[20841], 00:16:09.111 | 99.99th=[23725] 00:16:09.111 bw ( KiB/s): min=60608, max=73728, per=92.31%, avg=67152.00, stdev=5566.50, samples=4 00:16:09.111 iops : min= 3788, max= 4608, avg=4197.00, stdev=347.91, samples=4 00:16:09.111 lat (msec) : 4=0.33%, 10=47.82%, 20=51.70%, 50=0.15% 00:16:09.111 cpu : usr=81.21%, sys=14.11%, ctx=16, majf=0, minf=1 00:16:09.111 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:16:09.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:09.111 issued rwts: total=16133,8461,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.111 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:09.111 00:16:09.111 Run status group 0 (all jobs): 00:16:09.111 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=252MiB (264MB), run=2007-2007msec 00:16:09.111 WRITE: bw=71.0MiB/s (74.5MB/s), 71.0MiB/s-71.0MiB/s (74.5MB/s-74.5MB/s), io=132MiB (139MB), run=1861-1861msec 00:16:09.111 10:21:22 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:09.111 10:21:22 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:16:09.111 10:21:22 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:16:09.111 10:21:22 -- host/fio.sh@51 -- # get_nvme_bdfs 00:16:09.111 10:21:22 -- common/autotest_common.sh@1498 -- # bdfs=() 00:16:09.111 10:21:22 -- common/autotest_common.sh@1498 -- # local bdfs 00:16:09.111 10:21:22 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:09.111 10:21:22 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:09.111 10:21:22 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:16:09.111 10:21:22 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:16:09.111 10:21:22 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:16:09.111 10:21:22 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:16:09.370 Nvme0n1 00:16:09.370 10:21:22 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:16:09.629 10:21:22 -- host/fio.sh@53 -- # ls_guid=dbe75dfb-0878-4028-bffb-a93733697a0a 00:16:09.629 10:21:22 -- host/fio.sh@54 -- # get_lvs_free_mb dbe75dfb-0878-4028-bffb-a93733697a0a 00:16:09.629 10:21:22 -- common/autotest_common.sh@1343 -- # local lvs_uuid=dbe75dfb-0878-4028-bffb-a93733697a0a 00:16:09.629 10:21:22 -- common/autotest_common.sh@1344 -- # local lvs_info 00:16:09.629 10:21:22 -- common/autotest_common.sh@1345 -- # local fc 00:16:09.629 10:21:22 -- 
common/autotest_common.sh@1346 -- # local cs 00:16:09.629 10:21:22 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:09.888 10:21:23 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:16:09.888 { 00:16:09.888 "uuid": "dbe75dfb-0878-4028-bffb-a93733697a0a", 00:16:09.888 "name": "lvs_0", 00:16:09.888 "base_bdev": "Nvme0n1", 00:16:09.888 "total_data_clusters": 4, 00:16:09.888 "free_clusters": 4, 00:16:09.888 "block_size": 4096, 00:16:09.888 "cluster_size": 1073741824 00:16:09.888 } 00:16:09.888 ]' 00:16:09.888 10:21:23 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="dbe75dfb-0878-4028-bffb-a93733697a0a") .free_clusters' 00:16:09.888 10:21:23 -- common/autotest_common.sh@1348 -- # fc=4 00:16:09.888 10:21:23 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="dbe75dfb-0878-4028-bffb-a93733697a0a") .cluster_size' 00:16:09.888 4096 00:16:09.888 10:21:23 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:16:09.888 10:21:23 -- common/autotest_common.sh@1352 -- # free_mb=4096 00:16:09.888 10:21:23 -- common/autotest_common.sh@1353 -- # echo 4096 00:16:09.888 10:21:23 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:16:10.147 72d6e9a2-374d-4efb-8e3d-6242a36053b0 00:16:10.147 10:21:23 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:16:10.405 10:21:23 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:16:10.664 10:21:23 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:10.923 10:21:24 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:10.923 10:21:24 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:10.923 10:21:24 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:16:10.923 10:21:24 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:10.923 10:21:24 -- common/autotest_common.sh@1318 -- # local sanitizers 00:16:10.923 10:21:24 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:10.923 10:21:24 -- common/autotest_common.sh@1320 -- # shift 00:16:10.923 10:21:24 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:16:10.923 10:21:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:16:10.923 10:21:24 -- common/autotest_common.sh@1324 -- # grep libasan 00:16:10.923 10:21:24 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:10.923 10:21:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:16:10.923 10:21:24 -- common/autotest_common.sh@1324 -- # asan_lib= 00:16:10.923 10:21:24 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:16:10.923 10:21:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:16:10.923 10:21:24 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:10.923 10:21:24 -- 
common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:16:10.923 10:21:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:16:10.923 10:21:24 -- common/autotest_common.sh@1324 -- # asan_lib= 00:16:10.923 10:21:24 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:16:10.923 10:21:24 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:10.923 10:21:24 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:10.923 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:10.923 fio-3.35 00:16:10.923 Starting 1 thread 00:16:13.457 00:16:13.457 test: (groupid=0, jobs=1): err= 0: pid=81252: Fri Jul 26 10:21:26 2024 00:16:13.457 read: IOPS=6342, BW=24.8MiB/s (26.0MB/s)(49.8MiB/2010msec) 00:16:13.457 slat (usec): min=2, max=322, avg= 2.68, stdev= 4.07 00:16:13.457 clat (usec): min=3199, max=20051, avg=10531.98, stdev=890.17 00:16:13.457 lat (usec): min=3209, max=20053, avg=10534.66, stdev=889.92 00:16:13.457 clat percentiles (usec): 00:16:13.457 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:16:13.457 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:16:13.457 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:16:13.457 | 99.00th=[12518], 99.50th=[12911], 99.90th=[16581], 99.95th=[18744], 00:16:13.457 | 99.99th=[19006] 00:16:13.457 bw ( KiB/s): min=24351, max=25904, per=99.94%, avg=25357.75, stdev=700.42, samples=4 00:16:13.457 iops : min= 6087, max= 6476, avg=6339.25, stdev=175.46, samples=4 00:16:13.457 write: IOPS=6339, BW=24.8MiB/s (26.0MB/s)(49.8MiB/2010msec); 0 zone resets 00:16:13.457 slat (usec): min=2, max=304, avg= 2.73, stdev= 3.22 00:16:13.457 clat (usec): min=2488, max=20072, avg=9552.32, stdev=865.73 00:16:13.457 lat (usec): min=2502, max=20074, avg=9555.05, stdev=865.63 00:16:13.457 clat percentiles (usec): 00:16:13.457 | 1.00th=[ 7767], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 8848], 00:16:13.457 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9765], 00:16:13.457 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10552], 95.00th=[10814], 00:16:13.457 | 99.00th=[11338], 99.50th=[11731], 99.90th=[17433], 99.95th=[17695], 00:16:13.457 | 99.99th=[20055] 00:16:13.457 bw ( KiB/s): min=25104, max=25520, per=99.95%, avg=25345.25, stdev=178.93, samples=4 00:16:13.457 iops : min= 6276, max= 6380, avg=6336.25, stdev=44.69, samples=4 00:16:13.457 lat (msec) : 4=0.05%, 10=49.46%, 20=50.48%, 50=0.01% 00:16:13.457 cpu : usr=74.17%, sys=19.96%, ctx=7, majf=0, minf=5 00:16:13.457 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:13.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:13.457 issued rwts: total=12749,12742,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.457 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:13.457 00:16:13.457 Run status group 0 (all jobs): 00:16:13.457 READ: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=49.8MiB (52.2MB), run=2010-2010msec 00:16:13.457 WRITE: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=49.8MiB (52.2MB), run=2010-2010msec 00:16:13.457 10:21:26 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:13.717 10:21:26 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:16:13.717 10:21:27 -- host/fio.sh@64 -- # ls_nested_guid=2eb7319c-ad5f-4f21-a0c1-45dd9959da47 00:16:13.717 10:21:27 -- host/fio.sh@65 -- # get_lvs_free_mb 2eb7319c-ad5f-4f21-a0c1-45dd9959da47 00:16:13.717 10:21:27 -- common/autotest_common.sh@1343 -- # local lvs_uuid=2eb7319c-ad5f-4f21-a0c1-45dd9959da47 00:16:13.717 10:21:27 -- common/autotest_common.sh@1344 -- # local lvs_info 00:16:13.717 10:21:27 -- common/autotest_common.sh@1345 -- # local fc 00:16:13.717 10:21:27 -- common/autotest_common.sh@1346 -- # local cs 00:16:13.717 10:21:27 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:14.285 10:21:27 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:16:14.285 { 00:16:14.285 "uuid": "dbe75dfb-0878-4028-bffb-a93733697a0a", 00:16:14.285 "name": "lvs_0", 00:16:14.285 "base_bdev": "Nvme0n1", 00:16:14.285 "total_data_clusters": 4, 00:16:14.285 "free_clusters": 0, 00:16:14.285 "block_size": 4096, 00:16:14.285 "cluster_size": 1073741824 00:16:14.285 }, 00:16:14.285 { 00:16:14.285 "uuid": "2eb7319c-ad5f-4f21-a0c1-45dd9959da47", 00:16:14.285 "name": "lvs_n_0", 00:16:14.285 "base_bdev": "72d6e9a2-374d-4efb-8e3d-6242a36053b0", 00:16:14.285 "total_data_clusters": 1022, 00:16:14.285 "free_clusters": 1022, 00:16:14.285 "block_size": 4096, 00:16:14.285 "cluster_size": 4194304 00:16:14.285 } 00:16:14.285 ]' 00:16:14.285 10:21:27 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="2eb7319c-ad5f-4f21-a0c1-45dd9959da47") .free_clusters' 00:16:14.285 10:21:27 -- common/autotest_common.sh@1348 -- # fc=1022 00:16:14.285 10:21:27 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="2eb7319c-ad5f-4f21-a0c1-45dd9959da47") .cluster_size' 00:16:14.285 4088 00:16:14.285 10:21:27 -- common/autotest_common.sh@1349 -- # cs=4194304 00:16:14.285 10:21:27 -- common/autotest_common.sh@1352 -- # free_mb=4088 00:16:14.285 10:21:27 -- common/autotest_common.sh@1353 -- # echo 4088 00:16:14.285 10:21:27 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:16:14.544 5189aa6c-1a6c-4ce8-9d03-3fbc1a1e15af 00:16:14.544 10:21:27 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:16:14.803 10:21:28 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:16:15.061 10:21:28 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:16:15.319 10:21:28 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:15.319 10:21:28 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:15.319 10:21:28 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:16:15.319 10:21:28 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:15.319 
10:21:28 -- common/autotest_common.sh@1318 -- # local sanitizers 00:16:15.319 10:21:28 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:15.319 10:21:28 -- common/autotest_common.sh@1320 -- # shift 00:16:15.319 10:21:28 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:16:15.319 10:21:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:16:15.319 10:21:28 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:15.319 10:21:28 -- common/autotest_common.sh@1324 -- # grep libasan 00:16:15.319 10:21:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:16:15.319 10:21:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:16:15.319 10:21:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:16:15.319 10:21:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:16:15.319 10:21:28 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:15.319 10:21:28 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:16:15.319 10:21:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:16:15.319 10:21:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:16:15.319 10:21:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:16:15.320 10:21:28 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:15.320 10:21:28 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:15.320 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:15.320 fio-3.35 00:16:15.320 Starting 1 thread 00:16:17.863 00:16:17.863 test: (groupid=0, jobs=1): err= 0: pid=81336: Fri Jul 26 10:21:31 2024 00:16:17.863 read: IOPS=5107, BW=20.0MiB/s (20.9MB/s)(40.1MiB/2008msec) 00:16:17.863 slat (nsec): min=1806, max=424211, avg=3050.52, stdev=5609.52 00:16:17.863 clat (usec): min=4050, max=22504, avg=13140.80, stdev=1272.23 00:16:17.863 lat (usec): min=4060, max=22506, avg=13143.85, stdev=1271.79 00:16:17.863 clat percentiles (usec): 00:16:17.863 | 1.00th=[10421], 5.00th=[11338], 10.00th=[11731], 20.00th=[12125], 00:16:17.863 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13435], 00:16:17.863 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14615], 95.00th=[15139], 00:16:17.863 | 99.00th=[16581], 99.50th=[17171], 99.90th=[21103], 99.95th=[21365], 00:16:17.863 | 99.99th=[22414] 00:16:17.863 bw ( KiB/s): min=19896, max=21008, per=99.63%, avg=20354.00, stdev=526.60, samples=4 00:16:17.863 iops : min= 4974, max= 5252, avg=5088.50, stdev=131.65, samples=4 00:16:17.863 write: IOPS=5088, BW=19.9MiB/s (20.8MB/s)(39.9MiB/2008msec); 0 zone resets 00:16:17.863 slat (nsec): min=1908, max=304344, avg=3128.50, stdev=3946.60 00:16:17.863 clat (usec): min=2809, max=19646, avg=11875.03, stdev=1122.65 00:16:17.863 lat (usec): min=2832, max=19649, avg=11878.15, stdev=1122.41 00:16:17.863 clat percentiles (usec): 00:16:17.863 | 1.00th=[ 9372], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 00:16:17.863 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:16:17.863 | 70.00th=[12387], 80.00th=[12780], 90.00th=[13173], 95.00th=[13698], 00:16:17.863 | 99.00th=[14615], 99.50th=[15139], 99.90th=[17695], 99.95th=[19268], 00:16:17.863 | 99.99th=[19530] 
00:16:17.863 bw ( KiB/s): min=19896, max=20808, per=99.93%, avg=20338.00, stdev=393.84, samples=4 00:16:17.863 iops : min= 4974, max= 5202, avg=5084.50, stdev=98.46, samples=4 00:16:17.863 lat (msec) : 4=0.01%, 10=1.69%, 20=98.22%, 50=0.08% 00:16:17.863 cpu : usr=70.10%, sys=23.87%, ctx=4, majf=0, minf=5 00:16:17.863 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:16:17.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:17.863 issued rwts: total=10256,10217,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.863 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.863 00:16:17.863 Run status group 0 (all jobs): 00:16:17.863 READ: bw=20.0MiB/s (20.9MB/s), 20.0MiB/s-20.0MiB/s (20.9MB/s-20.9MB/s), io=40.1MiB (42.0MB), run=2008-2008msec 00:16:17.863 WRITE: bw=19.9MiB/s (20.8MB/s), 19.9MiB/s-19.9MiB/s (20.8MB/s-20.8MB/s), io=39.9MiB (41.8MB), run=2008-2008msec 00:16:17.863 10:21:31 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:18.122 10:21:31 -- host/fio.sh@74 -- # sync 00:16:18.122 10:21:31 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:16:18.380 10:21:31 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:16:18.639 10:21:31 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:16:18.897 10:21:32 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:16:19.155 10:21:32 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:16:20.101 10:21:33 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:20.101 10:21:33 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:20.101 10:21:33 -- host/fio.sh@86 -- # nvmftestfini 00:16:20.101 10:21:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:20.101 10:21:33 -- nvmf/common.sh@116 -- # sync 00:16:20.101 10:21:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:20.101 10:21:33 -- nvmf/common.sh@119 -- # set +e 00:16:20.101 10:21:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:20.101 10:21:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:20.101 rmmod nvme_tcp 00:16:20.101 rmmod nvme_fabrics 00:16:20.101 rmmod nvme_keyring 00:16:20.101 10:21:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:20.101 10:21:33 -- nvmf/common.sh@123 -- # set -e 00:16:20.101 10:21:33 -- nvmf/common.sh@124 -- # return 0 00:16:20.101 10:21:33 -- nvmf/common.sh@477 -- # '[' -n 81017 ']' 00:16:20.101 10:21:33 -- nvmf/common.sh@478 -- # killprocess 81017 00:16:20.101 10:21:33 -- common/autotest_common.sh@926 -- # '[' -z 81017 ']' 00:16:20.101 10:21:33 -- common/autotest_common.sh@930 -- # kill -0 81017 00:16:20.101 10:21:33 -- common/autotest_common.sh@931 -- # uname 00:16:20.101 10:21:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:20.101 10:21:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81017 00:16:20.101 killing process with pid 81017 00:16:20.101 10:21:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:20.101 10:21:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:20.101 10:21:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81017' 00:16:20.101 10:21:33 -- 
common/autotest_common.sh@945 -- # kill 81017 00:16:20.101 10:21:33 -- common/autotest_common.sh@950 -- # wait 81017 00:16:20.359 10:21:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:20.359 10:21:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:20.359 10:21:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:20.359 10:21:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:20.359 10:21:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:20.359 10:21:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.359 10:21:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.360 10:21:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.360 10:21:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:20.360 ************************************ 00:16:20.360 END TEST nvmf_fio_host 00:16:20.360 ************************************ 00:16:20.360 00:16:20.360 real 0m19.739s 00:16:20.360 user 1m26.809s 00:16:20.360 sys 0m4.442s 00:16:20.360 10:21:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:20.360 10:21:33 -- common/autotest_common.sh@10 -- # set +x 00:16:20.360 10:21:33 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:20.360 10:21:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:20.360 10:21:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:20.360 10:21:33 -- common/autotest_common.sh@10 -- # set +x 00:16:20.360 ************************************ 00:16:20.360 START TEST nvmf_failover 00:16:20.360 ************************************ 00:16:20.360 10:21:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:20.360 * Looking for test storage... 
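nvmftestfini, logged just above, unwinds the whole setup: the initiator modules are removed (the rmmod lines show nvme_tcp, nvme_fabrics and nvme_keyring going away), the nvmf_tgt process is killed and reaped, the namespace is dropped and the leftover initiator address is flushed. Reduced to its essentials it is roughly the following; $nvmfpid stands for the pid recorded at startup (81017 in this run), and the netns delete is an assumption about what the harness's _remove_spdk_ns helper amounts to here:

    modprobe -v -r nvme-tcp                   # removes nvme_tcp and, per the log, nvme_fabrics/nvme_keyring with it
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"        # stop the target started earlier
    ip netns delete nvmf_tgt_ns_spdk          # assumption: the effect of _remove_spdk_ns in this run
    ip -4 addr flush nvmf_init_if             # clear the initiator-side address

The failover test that starts next repeats the same veth/bridge bring-up from nvmf/common.sh before provisioning its own target.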
00:16:20.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:20.360 10:21:33 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:20.360 10:21:33 -- nvmf/common.sh@7 -- # uname -s 00:16:20.360 10:21:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.360 10:21:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.360 10:21:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.360 10:21:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.360 10:21:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.360 10:21:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.360 10:21:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.360 10:21:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.360 10:21:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.360 10:21:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.360 10:21:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:16:20.360 10:21:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:16:20.360 10:21:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.360 10:21:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.360 10:21:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:20.360 10:21:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:20.360 10:21:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.360 10:21:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.360 10:21:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.360 10:21:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.360 10:21:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.360 10:21:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.360 10:21:33 -- paths/export.sh@5 
-- # export PATH 00:16:20.360 10:21:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.360 10:21:33 -- nvmf/common.sh@46 -- # : 0 00:16:20.360 10:21:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:20.360 10:21:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:20.360 10:21:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:20.360 10:21:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:20.360 10:21:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.360 10:21:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:20.360 10:21:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:20.360 10:21:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:20.360 10:21:33 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:20.360 10:21:33 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:20.360 10:21:33 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.360 10:21:33 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:20.360 10:21:33 -- host/failover.sh@18 -- # nvmftestinit 00:16:20.360 10:21:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:20.360 10:21:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:20.360 10:21:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:20.360 10:21:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:20.360 10:21:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:20.360 10:21:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.360 10:21:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.360 10:21:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.360 10:21:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:20.360 10:21:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:20.360 10:21:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:20.360 10:21:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:20.360 10:21:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:20.360 10:21:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:20.360 10:21:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.360 10:21:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.360 10:21:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:20.360 10:21:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:20.360 10:21:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:20.360 10:21:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:20.360 10:21:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:20.360 10:21:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.360 10:21:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:20.360 10:21:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:20.360 10:21:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:16:20.360 10:21:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:20.360 10:21:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:20.360 10:21:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:20.618 Cannot find device "nvmf_tgt_br" 00:16:20.618 10:21:33 -- nvmf/common.sh@154 -- # true 00:16:20.618 10:21:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:20.618 Cannot find device "nvmf_tgt_br2" 00:16:20.618 10:21:33 -- nvmf/common.sh@155 -- # true 00:16:20.618 10:21:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:20.618 10:21:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:20.618 Cannot find device "nvmf_tgt_br" 00:16:20.618 10:21:33 -- nvmf/common.sh@157 -- # true 00:16:20.618 10:21:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:20.618 Cannot find device "nvmf_tgt_br2" 00:16:20.618 10:21:33 -- nvmf/common.sh@158 -- # true 00:16:20.618 10:21:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:20.618 10:21:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:20.619 10:21:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:20.619 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:20.619 10:21:33 -- nvmf/common.sh@161 -- # true 00:16:20.619 10:21:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:20.619 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:20.619 10:21:33 -- nvmf/common.sh@162 -- # true 00:16:20.619 10:21:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:20.619 10:21:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:20.619 10:21:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:20.619 10:21:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:20.619 10:21:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:20.619 10:21:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:20.619 10:21:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:20.619 10:21:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:20.619 10:21:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:20.619 10:21:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:20.619 10:21:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:20.619 10:21:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:20.619 10:21:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:20.619 10:21:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:20.619 10:21:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:20.619 10:21:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:20.619 10:21:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:20.619 10:21:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:20.619 10:21:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:20.619 10:21:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:20.619 10:21:34 -- nvmf/common.sh@197 -- # ip 
link set nvmf_tgt_br2 master nvmf_br 00:16:20.876 10:21:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:20.876 10:21:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:20.876 10:21:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:20.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:20.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:16:20.877 00:16:20.877 --- 10.0.0.2 ping statistics --- 00:16:20.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.877 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:20.877 10:21:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:20.877 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:20.877 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:16:20.877 00:16:20.877 --- 10.0.0.3 ping statistics --- 00:16:20.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.877 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:16:20.877 10:21:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:20.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:20.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:20.877 00:16:20.877 --- 10.0.0.1 ping statistics --- 00:16:20.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.877 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:20.877 10:21:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.877 10:21:34 -- nvmf/common.sh@421 -- # return 0 00:16:20.877 10:21:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:20.877 10:21:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.877 10:21:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:20.877 10:21:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:20.877 10:21:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.877 10:21:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:20.877 10:21:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:20.877 10:21:34 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:20.877 10:21:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:20.877 10:21:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:20.877 10:21:34 -- common/autotest_common.sh@10 -- # set +x 00:16:20.877 10:21:34 -- nvmf/common.sh@469 -- # nvmfpid=81572 00:16:20.877 10:21:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:20.877 10:21:34 -- nvmf/common.sh@470 -- # waitforlisten 81572 00:16:20.877 10:21:34 -- common/autotest_common.sh@819 -- # '[' -z 81572 ']' 00:16:20.877 10:21:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.877 10:21:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:20.877 10:21:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.877 10:21:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:20.877 10:21:34 -- common/autotest_common.sh@10 -- # set +x 00:16:20.877 [2024-07-26 10:21:34.175004] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
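The failover target is now starting on cores 1-3 (-m 0xE); the lines that follow provision it over RPC with one subsystem listening on three ports and then drive it from a separate bdevperf process that owns its own RPC socket, so listeners can be removed while I/O is running. Stripped of the harness helpers, the flow sketched from the upcoming log lines is roughly as below — the for loop is a condensation, and the socket poll stands in for the harness's waitforlisten on /var/tmp/bdevperf.sock:

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC=$SPDK/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                     # same subsystem exposed on three listeners
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done

    $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.2; done
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # drop the first path mid-run

Removing the 4420 listener while bdevperf is verifying is what exercises the path switch to 4421; the repeated nvmf_tcp_qpair_set_recv_state messages near the end of this excerpt coincide with those remove_listener calls in the log.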
00:16:20.877 [2024-07-26 10:21:34.175087] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.877 [2024-07-26 10:21:34.315195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:21.135 [2024-07-26 10:21:34.390266] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:21.135 [2024-07-26 10:21:34.390483] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.135 [2024-07-26 10:21:34.390495] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:21.135 [2024-07-26 10:21:34.390504] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.135 [2024-07-26 10:21:34.390666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.135 [2024-07-26 10:21:34.390856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.135 [2024-07-26 10:21:34.390854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:22.068 10:21:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:22.068 10:21:35 -- common/autotest_common.sh@852 -- # return 0 00:16:22.068 10:21:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:22.068 10:21:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:22.068 10:21:35 -- common/autotest_common.sh@10 -- # set +x 00:16:22.068 10:21:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.068 10:21:35 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:22.068 [2024-07-26 10:21:35.522571] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:22.326 10:21:35 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:22.594 Malloc0 00:16:22.594 10:21:35 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:22.866 10:21:36 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:23.124 10:21:36 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:23.124 [2024-07-26 10:21:36.580025] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:23.382 10:21:36 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:23.382 [2024-07-26 10:21:36.796189] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:23.382 10:21:36 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:23.641 [2024-07-26 10:21:37.004505] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:23.641 10:21:37 -- host/failover.sh@31 -- # bdevperf_pid=81635 00:16:23.641 10:21:37 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z 
-r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:16:23.641 10:21:37 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:23.641 10:21:37 -- host/failover.sh@34 -- # waitforlisten 81635 /var/tmp/bdevperf.sock 00:16:23.641 10:21:37 -- common/autotest_common.sh@819 -- # '[' -z 81635 ']' 00:16:23.641 10:21:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:23.641 10:21:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:23.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:23.641 10:21:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:23.641 10:21:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:23.641 10:21:37 -- common/autotest_common.sh@10 -- # set +x 00:16:25.015 10:21:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:25.015 10:21:38 -- common/autotest_common.sh@852 -- # return 0 00:16:25.015 10:21:38 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:25.015 NVMe0n1 00:16:25.015 10:21:38 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:25.273 00:16:25.273 10:21:38 -- host/failover.sh@39 -- # run_test_pid=81653 00:16:25.273 10:21:38 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:25.273 10:21:38 -- host/failover.sh@41 -- # sleep 1 00:16:26.648 10:21:39 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:26.648 [2024-07-26 10:21:39.937136] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b68e70 is same with the state(5) to be set 00:16:26.648 [2024-07-26 10:21:39.937222] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b68e70 is same with the state(5) to be set 00:16:26.648 [2024-07-26 10:21:39.937233] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b68e70 is same with the state(5) to be set 00:16:26.648 [2024-07-26 10:21:39.937242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b68e70 is same with the state(5) to be set 00:16:26.648 [2024-07-26 10:21:39.937267] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b68e70 is same with the state(5) to be set 00:16:26.648 [2024-07-26 10:21:39.937276] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b68e70 is same with the state(5) to be set 00:16:26.648 [2024-07-26 10:21:39.937284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b68e70 is same with the state(5) to be set 00:16:26.648 [2024-07-26 10:21:39.937294] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b68e70 is same with the state(5) to be set 00:16:26.648 [2024-07-26 10:21:39.937302] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b68e70 is same with 
the state(5) to be set 00:16:26.648 [2024-07-26 10:21:39.937310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b68e70 is same with the state(5) to be set 00:16:26.648 [2024-07-26 10:21:39.937318] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b68e70 is same with the state(5) to be set 00:16:26.648 [2024-07-26 10:21:39.937326] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b68e70 is same with the state(5) to be set 00:16:26.648 [2024-07-26 10:21:39.937335] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b68e70 is same with the state(5) to be set 00:16:26.648 [2024-07-26 10:21:39.937343] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b68e70 is same with the state(5) to be set 00:16:26.649 [2024-07-26 10:21:39.937351] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b68e70 is same with the state(5) to be set 00:16:26.649 [2024-07-26 10:21:39.937359] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b68e70 is same with the state(5) to be set 00:16:26.649 [2024-07-26 10:21:39.937367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b68e70 is same with the state(5) to be set 00:16:26.649 10:21:39 -- host/failover.sh@45 -- # sleep 3 00:16:29.930 10:21:42 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:29.930 00:16:29.930 10:21:43 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:30.187 [2024-07-26 10:21:43.534324] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.187 [2024-07-26 10:21:43.534384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.187 [2024-07-26 10:21:43.534396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.187 [2024-07-26 10:21:43.534405] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.187 [2024-07-26 10:21:43.534414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.187 [2024-07-26 10:21:43.534422] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.187 [2024-07-26 10:21:43.534431] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.187 [2024-07-26 10:21:43.534439] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.187 [2024-07-26 10:21:43.534448] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.187 [2024-07-26 10:21:43.534456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.187 [2024-07-26 10:21:43.534465] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.187 [2024-07-26 10:21:43.534473] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.187 [2024-07-26 10:21:43.534481] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.187 [2024-07-26 10:21:43.534490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.187 [2024-07-26 10:21:43.534498] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.187 [2024-07-26 10:21:43.534506] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.188 [2024-07-26 10:21:43.534514] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.188 [2024-07-26 10:21:43.534522] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.188 [2024-07-26 10:21:43.534531] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.188 [2024-07-26 10:21:43.534539] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.188 [2024-07-26 10:21:43.534547] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.188 [2024-07-26 10:21:43.534555] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.188 [2024-07-26 10:21:43.534563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.188 [2024-07-26 10:21:43.534584] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.188 [2024-07-26 10:21:43.534594] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.188 [2024-07-26 10:21:43.534602] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.188 [2024-07-26 10:21:43.534610] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.188 [2024-07-26 10:21:43.534619] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.188 [2024-07-26 10:21:43.534627] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69550 is same with the state(5) to be set 00:16:30.188 10:21:43 -- host/failover.sh@50 -- # sleep 3 00:16:33.463 10:21:46 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:33.463 [2024-07-26 10:21:46.835888] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.463 10:21:46 -- host/failover.sh@55 -- # sleep 1 00:16:34.835 10:21:47 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:34.835 [2024-07-26 10:21:48.096432] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7ec0 is same with the state(5) to be set 00:16:34.835 [2024-07-26 10:21:48.096490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7ec0 is same with the state(5) to be set 00:16:34.835 [2024-07-26 10:21:48.096503] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7ec0 is same with the state(5) to be set 00:16:34.835 [2024-07-26 10:21:48.096511] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7ec0 is same with the state(5) to be set 00:16:34.835 [2024-07-26 10:21:48.096521] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7ec0 is same with the state(5) to be set 00:16:34.835 [2024-07-26 10:21:48.096529] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7ec0 is same with the state(5) to be set 00:16:34.835 [2024-07-26 10:21:48.096537] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7ec0 is same with the state(5) to be set 00:16:34.835 [2024-07-26 10:21:48.096545] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7ec0 is same with the state(5) to be set 00:16:34.835 [2024-07-26 10:21:48.096553] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7ec0 is same with the state(5) to be set 00:16:34.835 [2024-07-26 10:21:48.096562] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7ec0 is same with the state(5) to be set 00:16:34.835 [2024-07-26 10:21:48.096570] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7ec0 is same with the state(5) to be set 00:16:34.835 [2024-07-26 10:21:48.096591] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7ec0 is same with the state(5) to be set 00:16:34.835 [2024-07-26 10:21:48.096600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7ec0 is same with the state(5) to be set 00:16:34.835 [2024-07-26 10:21:48.096608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7ec0 is same with the state(5) to be set 00:16:34.835 10:21:48 -- host/failover.sh@59 -- # wait 81653 00:16:41.411 0 00:16:41.411 10:21:53 -- host/failover.sh@61 -- # killprocess 81635 00:16:41.411 10:21:53 -- common/autotest_common.sh@926 -- # '[' -z 81635 ']' 00:16:41.411 10:21:53 -- common/autotest_common.sh@930 -- # kill -0 81635 00:16:41.411 10:21:53 -- common/autotest_common.sh@931 -- # uname 00:16:41.411 10:21:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:41.411 10:21:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81635 00:16:41.411 10:21:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:41.411 10:21:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:41.411 killing process with pid 81635 00:16:41.411 10:21:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81635' 00:16:41.411 10:21:53 -- common/autotest_common.sh@945 -- # kill 81635 00:16:41.411 10:21:53 -- common/autotest_common.sh@950 -- # wait 81635 00:16:41.411 10:21:54 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:41.411 [2024-07-26 10:21:37.062660] Starting SPDK v24.01.1-pre git sha1 
dbef7efac / DPDK 22.11.4 initialization... 00:16:41.411 [2024-07-26 10:21:37.062754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81635 ] 00:16:41.411 [2024-07-26 10:21:37.206354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.411 [2024-07-26 10:21:37.283012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.411 Running I/O for 15 seconds... 00:16:41.411 [2024-07-26 10:21:39.938622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:119528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.411 [2024-07-26 10:21:39.938998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.411 [2024-07-26 10:21:39.939162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:119536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.411 [2024-07-26 10:21:39.939266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.411 [2024-07-26 10:21:39.939360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:119544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.411 [2024-07-26 10:21:39.939468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.411 [2024-07-26 10:21:39.939605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:119560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.411 [2024-07-26 10:21:39.939771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.411 [2024-07-26 10:21:39.939871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:119568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.411 [2024-07-26 10:21:39.939972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.411 [2024-07-26 10:21:39.940062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:119600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.411 [2024-07-26 10:21:39.940155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.411 [2024-07-26 10:21:39.940246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:119624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.411 [2024-07-26 10:21:39.940350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.411 [2024-07-26 10:21:39.940441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:119640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.411 [2024-07-26 10:21:39.940537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.411 [2024-07-26 10:21:39.940647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119648 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.411 [2024-07-26 10:21:39.940748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.411 [2024-07-26 10:21:39.940837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:119656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.411 [2024-07-26 10:21:39.940932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.411 [2024-07-26 10:21:39.941021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:118976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.411 [2024-07-26 10:21:39.941153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.411 [2024-07-26 10:21:39.941260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:118984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.411 [2024-07-26 10:21:39.941367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.411 [2024-07-26 10:21:39.941459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:118992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.411 [2024-07-26 10:21:39.941557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.411 [2024-07-26 10:21:39.941678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:119000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.411 [2024-07-26 10:21:39.941780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.941876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:119008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.941973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.942082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:119024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.942193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.942299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.942391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.942470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:119040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.942572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.942716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:119664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:41.412 [2024-07-26 10:21:39.942849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.942929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:119672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.412 [2024-07-26 10:21:39.943025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:119680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.412 [2024-07-26 10:21:39.943077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.943113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:119696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.412 [2024-07-26 10:21:39.943148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:119704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.412 [2024-07-26 10:21:39.943196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:119712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.943234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:119720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.412 [2024-07-26 10:21:39.943269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.943303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:119736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.412 [2024-07-26 10:21:39.943338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:119744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 
10:21:39.943372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:119752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.943407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:119760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.412 [2024-07-26 10:21:39.943441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:119768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.412 [2024-07-26 10:21:39.943476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:119776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.943511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:119784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.943546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:119056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.943588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:119064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.943642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:119072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.943692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:119096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.943742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:119120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.943777] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:119128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.943812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:119136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.943846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:119144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.943881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:119792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.412 [2024-07-26 10:21:39.943916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:119800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.412 [2024-07-26 10:21:39.943950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.943970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:119808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.412 [2024-07-26 10:21:39.943987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.944005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:119816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.944021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.944039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:119824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.944056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.944074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.944091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.944109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.412 [2024-07-26 10:21:39.944134] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.944153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:119848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.412 [2024-07-26 10:21:39.944170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.944194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.944212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.944230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:119864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.944246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.944264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:119168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.412 [2024-07-26 10:21:39.944281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.412 [2024-07-26 10:21:39.944299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:119184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.944315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.944333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.944350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.944368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.944384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.944418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.944434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.944451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.944467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.944485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:119248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.944500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.944518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:119256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.944534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.944552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:119872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.413 [2024-07-26 10:21:39.944568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.948015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:119880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.948160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.948268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.948368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.948456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:119896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.948532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.948632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.413 [2024-07-26 10:21:39.948752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.948840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.948938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.949015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:119920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.413 [2024-07-26 10:21:39.949106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.949191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:119928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.413 [2024-07-26 10:21:39.949278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.949364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.949471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.949562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:119944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.413 [2024-07-26 10:21:39.949697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.949788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:119952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.413 [2024-07-26 10:21:39.949886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.949964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.950089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.950176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:119968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.413 [2024-07-26 10:21:39.950270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.950357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:119976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.413 [2024-07-26 10:21:39.950474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.950575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:119280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.950693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.950788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:119288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.950878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.950965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:119304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.951058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.951135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.951209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.951295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:119320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.951391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 
10:21:39.951480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.951567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.951666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:119400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.951805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.951902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:119408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.951996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.952108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.952205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.952306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:119992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.952467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.952559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.952676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.952758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.952861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.952951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.413 [2024-07-26 10:21:39.953059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.953147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.413 [2024-07-26 10:21:39.953244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.953331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:120032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.413 [2024-07-26 10:21:39.953421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.953508] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.413 [2024-07-26 10:21:39.953636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.953750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.413 [2024-07-26 10:21:39.953847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.953939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.413 [2024-07-26 10:21:39.954040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.413 [2024-07-26 10:21:39.954131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.954220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.954312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.414 [2024-07-26 10:21:39.954410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.954502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.954647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.954685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.954727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.954748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.954766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.954784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.414 [2024-07-26 10:21:39.954801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.954820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.954837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.954869] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.414 [2024-07-26 10:21:39.954887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.954905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.954922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.954940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:119432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.954957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.954974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:119440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.954991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:119456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.955026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:119472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.955060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:119488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.955095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:119496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.955130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:119504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.955164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:119512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.955199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 
nsid:1 lba:120136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.414 [2024-07-26 10:21:39.955233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.955269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.414 [2024-07-26 10:21:39.955313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.414 [2024-07-26 10:21:39.955349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.414 [2024-07-26 10:21:39.955383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.955418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.414 [2024-07-26 10:21:39.955453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.414 [2024-07-26 10:21:39.955487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.414 [2024-07-26 10:21:39.955522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.955556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120216 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.955610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:119520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.955644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:119552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.955679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:119576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.955727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:119584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.955763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:119592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.955809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:119608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.955843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:119616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.414 [2024-07-26 10:21:39.955878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.955895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e61e0 is same with the state(5) to be set 00:16:41.414 [2024-07-26 10:21:39.955916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:41.414 [2024-07-26 10:21:39.955929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:41.414 [2024-07-26 10:21:39.955942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119632 len:8 PRP1 0x0 PRP2 0x0 00:16:41.414 [2024-07-26 10:21:39.955957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.956021] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18e61e0 was disconnected and 
freed. reset controller. 00:16:41.414 [2024-07-26 10:21:39.956043] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:41.414 [2024-07-26 10:21:39.956112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.414 [2024-07-26 10:21:39.956137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.414 [2024-07-26 10:21:39.956156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.415 [2024-07-26 10:21:39.956172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:39.956188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.415 [2024-07-26 10:21:39.956204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:39.956220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.415 [2024-07-26 10:21:39.956236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:39.956252] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:41.415 [2024-07-26 10:21:39.956321] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d4ea0 (9): Bad file descriptor 00:16:41.415 [2024-07-26 10:21:39.958857] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:41.415 [2024-07-26 10:21:39.990460] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
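For reference, the failover flow captured above reduces to the RPC sequence recorded earlier in this run. The sketch below is a condensed reconstruction of those logged commands (repository paths shortened; the nvmf target application and the bdevperf app started with "-z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f" are assumed to already be running), not an exact replay of host/failover.sh:

  # Target side: TCP transport, one malloc-backed namespace, listeners on three ports.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done

  # Initiator side: attach the controller through the bdevperf RPC socket on 4420,
  # then add the 4421 path so a second trid is available for failover.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # Fault injection: dropping the 4420 listener mid-I/O produces the ABORTED - SQ DELETION
  # completions shown above and the "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" reset.
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The abort burst that follows, timestamped 10:21:43, is the same pattern repeated for the removal of the 4421 listener.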
00:16:41.415 [2024-07-26 10:21:43.534693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:89032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.534750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.534808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.534839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.534859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:89064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.534875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.534893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:89072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.534909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.534928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:89080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.534945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.534963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:89088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.534980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.534998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:89104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.535015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.535050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:89736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.535084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.535119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535137] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.535153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:89776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.535188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.535222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:89800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.535257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:89816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.535312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:89824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.535348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:89856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.535383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:89864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.535418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:89168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.535452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.535486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:89184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.535521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.535555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.535610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.535645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.535680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:89280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.535730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.415 [2024-07-26 10:21:43.535775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:89880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.535812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:89888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.535848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.415 [2024-07-26 10:21:43.535882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:3 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.415 [2024-07-26 10:21:43.535923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.415 [2024-07-26 10:21:43.535949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:89912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.415 [2024-07-26 10:21:43.535965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.535983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.416 [2024-07-26 10:21:43.536000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.416 [2024-07-26 10:21:43.536034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.416 [2024-07-26 10:21:43.536068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.416 [2024-07-26 10:21:43.536102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.416 [2024-07-26 10:21:43.536137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.416 [2024-07-26 10:21:43.536171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:89968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.416 [2024-07-26 10:21:43.536206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:89976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.416 [2024-07-26 10:21:43.536250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89984 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:16:41.416 [2024-07-26 10:21:43.536284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:89288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.416 [2024-07-26 10:21:43.536321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.416 [2024-07-26 10:21:43.536355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:89344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.416 [2024-07-26 10:21:43.536391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.416 [2024-07-26 10:21:43.536425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.416 [2024-07-26 10:21:43.536459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:89376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.416 [2024-07-26 10:21:43.536508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:89416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.416 [2024-07-26 10:21:43.536543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:89432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.416 [2024-07-26 10:21:43.536590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.416 [2024-07-26 10:21:43.536627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.416 
[2024-07-26 10:21:43.536662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.416 [2024-07-26 10:21:43.536710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.416 [2024-07-26 10:21:43.536746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:90024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.416 [2024-07-26 10:21:43.536780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.416 [2024-07-26 10:21:43.536815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.416 [2024-07-26 10:21:43.536849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.416 [2024-07-26 10:21:43.536890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.416 [2024-07-26 10:21:43.536925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:90064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.416 [2024-07-26 10:21:43.536960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.536978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.416 [2024-07-26 10:21:43.536994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.537012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.416 [2024-07-26 10:21:43.537028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.537047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.416 [2024-07-26 10:21:43.537063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.537082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.416 [2024-07-26 10:21:43.537103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.537123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:90104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.416 [2024-07-26 10:21:43.537139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.537166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.416 [2024-07-26 10:21:43.537184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.537202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.416 [2024-07-26 10:21:43.537219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.416 [2024-07-26 10:21:43.537237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.416 [2024-07-26 10:21:43.537254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.537272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.417 [2024-07-26 10:21:43.537288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.537306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.537322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.537341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.537358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.537376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:89480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.537391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.537410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:89504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.537426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.537445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:89512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.537462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.537481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:89520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.537497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.537515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:89568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.537541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.537560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:89616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.537592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.537613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:89640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.537630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.537658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.417 [2024-07-26 10:21:43.537676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.537694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:90160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.537716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.537734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.537750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.537769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:90176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.537785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.537803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.417 [2024-07-26 10:21:43.537820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.537838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.417 [2024-07-26 10:21:43.537854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.537872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:90200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.537889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.537907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:90208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.537923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.537942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:90216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.537958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.537977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.417 [2024-07-26 10:21:43.537993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.538013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:90232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.538030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.538048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.417 [2024-07-26 10:21:43.538065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.538083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:90248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.417 [2024-07-26 10:21:43.538108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.538127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.417 [2024-07-26 10:21:43.538144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 
[2024-07-26 10:21:43.538162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.417 [2024-07-26 10:21:43.538179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.538196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.417 [2024-07-26 10:21:43.538213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.538231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:90280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.538247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.538265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.417 [2024-07-26 10:21:43.538287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.538305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.417 [2024-07-26 10:21:43.538321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.538340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:89648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.538357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.538375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:89656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.538392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.538410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:89680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.538426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.538445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:89688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.538461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.538479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:89696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.538495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.538513] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:89712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.538530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.538558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:89720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.538588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.538610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:89744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.538627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.538646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.417 [2024-07-26 10:21:43.538662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.417 [2024-07-26 10:21:43.538680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:90312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.417 [2024-07-26 10:21:43.538697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.538715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:90320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:43.538732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.538757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:90328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:43.538774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.538792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:90336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:43.538808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.538827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:90344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:43.538844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.538862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.418 [2024-07-26 10:21:43.538884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.538903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:63 nsid:1 lba:90360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:43.538920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.538938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:90368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:43.538954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.538973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:43.538994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.539013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:43.539038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.539058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.418 [2024-07-26 10:21:43.539075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.539093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:90400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:43.539109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.539128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.418 [2024-07-26 10:21:43.539144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.539162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:90416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:43.539178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.539197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:90424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:43.539213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.539231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.418 [2024-07-26 10:21:43.539247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.539266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:90440 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:43.539282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.539305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:89752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:43.539322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.539340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:89792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:43.539357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.539375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:89808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:43.539392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.539410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:89832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:43.539426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.539445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:89840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:43.539469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.539487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6e70 is same with the state(5) to be set 00:16:41.418 [2024-07-26 10:21:43.539516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:41.418 [2024-07-26 10:21:43.539531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:41.418 [2024-07-26 10:21:43.539544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89848 len:8 PRP1 0x0 PRP2 0x0 00:16:41.418 [2024-07-26 10:21:43.539559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.539641] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18e6e70 was disconnected and freed. reset controller. 
00:16:41.418 [2024-07-26 10:21:43.539665] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:16:41.418 [2024-07-26 10:21:43.539743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.418 [2024-07-26 10:21:43.539770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.539788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.418 [2024-07-26 10:21:43.539804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.539821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.418 [2024-07-26 10:21:43.539836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.539853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.418 [2024-07-26 10:21:43.539869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:43.539885] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:41.418 [2024-07-26 10:21:43.539947] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d4ea0 (9): Bad file descriptor 00:16:41.418 [2024-07-26 10:21:43.542248] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:41.418 [2024-07-26 10:21:43.577448] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
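That closes the second failover cycle: the same pattern repeats for qpair 0x18e6e70, this time moving from 10.0.0.2:4421 to 10.0.0.2:4422. For reference, the "(00/08)" pair printed by spdk_nvme_print_completion reads as the hex Status Code Type / Status Code, i.e. generic status 0x08, Command Aborted due to SQ Deletion, which is why every queued command in these dumps reports it; the trailing fields are cdw0, the submission queue head pointer, and the phase/more/do-not-retry bits. The snippet below is only an illustrative decoder for those printed fields (the function name decode is an assumption), not SPDK code.

    # Illustrative sketch only: decode the completion fields printed above.
    import re

    COMPLETION_RE = re.compile(
        r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) "
        r"qid:(?P<qid>\d+) cid:(?P<cid>\d+) cdw0:(?P<cdw0>[0-9a-f]+) "
        r"sqhd:(?P<sqhd>[0-9a-f]+) p:(?P<p>\d) m:(?P<m>\d) dnr:(?P<dnr>\d)",
        re.IGNORECASE,
    )

    def decode(line: str) -> dict:
        m = COMPLETION_RE.search(line)
        if not m:
            raise ValueError("not a completion line")
        # sct/sc/cdw0/sqhd are printed in hex; dnr=1 would mean "do not retry".
        return {k: int(v, 16) if k in ("sct", "sc", "cdw0", "sqhd") else int(v)
                for k, v in m.groupdict().items()}

    # decode("... ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0")
    # -> {'sct': 0, 'sc': 8, 'qid': 1, 'cid': 0, 'cdw0': 0, 'sqhd': 0, 'p': 0, 'm': 0, 'dnr': 0}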
00:16:41.418 [2024-07-26 10:21:48.096671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:48.096751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:48.096781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:48.096797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:48.096814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:52000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:48.096828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:48.096843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:52008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:48.096856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:48.096872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:52064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:48.096906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:48.096924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:52072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:48.096937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:48.096952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:52104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:48.096967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:48.096982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:48.096995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:48.097010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:52696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.418 [2024-07-26 10:21:48.097023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.418 [2024-07-26 10:21:48.097038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:52704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.097052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097067] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:52712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.097080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:52720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.097108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:52728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.097137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:52736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.097165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:52744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.097194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:52760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.419 [2024-07-26 10:21:48.097222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:52768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.097254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:52776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.097298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:52784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.419 [2024-07-26 10:21:48.097327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:52792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.097357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.097386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:52808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.097415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.419 [2024-07-26 10:21:48.097444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:52824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.097474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.097503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:52136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.097532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:52144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.097561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:52216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.097606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:52224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.097635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:52232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.097673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:93 nsid:1 lba:52248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.097703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:52256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.097732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:52832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.419 [2024-07-26 10:21:48.097762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:52840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.419 [2024-07-26 10:21:48.097791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:52848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.097821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.097850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:52864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.419 [2024-07-26 10:21:48.097878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.419 [2024-07-26 10:21:48.097915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:52880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.419 [2024-07-26 10:21:48.097944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:52888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.097974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.097989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:52896 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.098003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.098019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.419 [2024-07-26 10:21:48.098032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.098055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:52912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.419 [2024-07-26 10:21:48.098069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.098085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:52920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.098098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.098114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.419 [2024-07-26 10:21:48.098127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.098142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:52936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.419 [2024-07-26 10:21:48.098156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.098171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:52944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.098185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.098200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.419 [2024-07-26 10:21:48.098214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.419 [2024-07-26 10:21:48.098229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:52960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.419 [2024-07-26 10:21:48.098243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:52280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 [2024-07-26 10:21:48.098273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:52288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 
[2024-07-26 10:21:48.098302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 [2024-07-26 10:21:48.098331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:52320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 [2024-07-26 10:21:48.098360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 [2024-07-26 10:21:48.098389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:52384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 [2024-07-26 10:21:48.098424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:52392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 [2024-07-26 10:21:48.098454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 [2024-07-26 10:21:48.098484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:52968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 [2024-07-26 10:21:48.098514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:52976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 [2024-07-26 10:21:48.098542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:52984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.420 [2024-07-26 10:21:48.098583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:52992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 [2024-07-26 10:21:48.098615] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 [2024-07-26 10:21:48.098645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:53008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.420 [2024-07-26 10:21:48.098674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 [2024-07-26 10:21:48.098703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.420 [2024-07-26 10:21:48.098733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.420 [2024-07-26 10:21:48.098762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:53040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.420 [2024-07-26 10:21:48.098791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 [2024-07-26 10:21:48.098828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:53056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 [2024-07-26 10:21:48.098859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:53064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 [2024-07-26 10:21:48.098888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:53072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.420 [2024-07-26 10:21:48.098917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:53080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 [2024-07-26 10:21:48.098946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.420 [2024-07-26 10:21:48.098976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.098991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:53096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.420 [2024-07-26 10:21:48.099006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.099021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:53104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.420 [2024-07-26 10:21:48.099035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.099050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:53112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.420 [2024-07-26 10:21:48.099064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.099079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:53120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 [2024-07-26 10:21:48.099093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.099108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:52408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 [2024-07-26 10:21:48.099122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.099137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:52424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 [2024-07-26 10:21:48.099151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.099167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:52432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 [2024-07-26 10:21:48.099181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.099203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 [2024-07-26 10:21:48.099218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.099233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:52504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 [2024-07-26 10:21:48.099253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.420 [2024-07-26 10:21:48.099269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:52512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.420 [2024-07-26 10:21:48.099283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.099298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:52520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.099312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.099327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:52536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.099341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.099356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:53128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.099370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.099385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:53136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.099399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.099415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:53144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.099429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.099444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:53152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.421 [2024-07-26 10:21:48.099458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.099473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:53160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.421 [2024-07-26 10:21:48.099487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.099502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.099516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 
[2024-07-26 10:21:48.099531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:53176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.099545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.099560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:53184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.099593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.099611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.421 [2024-07-26 10:21:48.099624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.099640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.099659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.099675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:53208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.421 [2024-07-26 10:21:48.099688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.099716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:53216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.421 [2024-07-26 10:21:48.099731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.099746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.099765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.099781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:53232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.421 [2024-07-26 10:21:48.099795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.099810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.099824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.099839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:53248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.421 [2024-07-26 10:21:48.099852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.099868] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:53256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.421 [2024-07-26 10:21:48.099882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.099897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:52544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.099911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.099926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:52552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.099939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.099955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:52568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.099969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.099992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:52592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.100007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.100022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:52608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.100036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.100052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:52616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.100066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.100081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:52624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.100094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.100109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:52632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.100123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.100138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.100157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.100173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:21 nsid:1 lba:53272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.421 [2024-07-26 10:21:48.100186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.100202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:53280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.421 [2024-07-26 10:21:48.100215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.100231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.100249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.100265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.100278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.100294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:53304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.421 [2024-07-26 10:21:48.100307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.100323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:53312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.100336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.100352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:53320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.100365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.100388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.100403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.100418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:53336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.421 [2024-07-26 10:21:48.100432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.100447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:52640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.100461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.421 [2024-07-26 10:21:48.100477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:52648 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.421 [2024-07-26 10:21:48.100490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.422 [2024-07-26 10:21:48.100506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:52656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.422 [2024-07-26 10:21:48.100519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.422 [2024-07-26 10:21:48.100535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.422 [2024-07-26 10:21:48.100548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.422 [2024-07-26 10:21:48.100563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:52672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.422 [2024-07-26 10:21:48.100587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.422 [2024-07-26 10:21:48.100604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:52680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.422 [2024-07-26 10:21:48.100618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.422 [2024-07-26 10:21:48.100633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:52688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.422 [2024-07-26 10:21:48.100652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.422 [2024-07-26 10:21:48.100667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7c70 is same with the state(5) to be set 00:16:41.422 [2024-07-26 10:21:48.100685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:41.422 [2024-07-26 10:21:48.100696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:41.422 [2024-07-26 10:21:48.100707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52752 len:8 PRP1 0x0 PRP2 0x0 00:16:41.422 [2024-07-26 10:21:48.100720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.422 [2024-07-26 10:21:48.100784] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18d7c70 was disconnected and freed. reset controller. 
00:16:41.422 [2024-07-26 10:21:48.100802] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:16:41.422 [2024-07-26 10:21:48.100859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.422 [2024-07-26 10:21:48.100890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.422 [2024-07-26 10:21:48.100907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.422 [2024-07-26 10:21:48.100920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.422 [2024-07-26 10:21:48.100934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.422 [2024-07-26 10:21:48.100947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.422 [2024-07-26 10:21:48.100961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.422 [2024-07-26 10:21:48.100975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.422 [2024-07-26 10:21:48.100988] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:41.422 [2024-07-26 10:21:48.103370] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:41.422 [2024-07-26 10:21:48.103408] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d4ea0 (9): Bad file descriptor 00:16:41.422 [2024-07-26 10:21:48.137874] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
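For context, the burst of ABORTED - SQ DELETION notices above is the expected fallout of tearing down the active path: every queued READ/WRITE on qpair 0x18d7c70 is aborted, the qpair is freed, and bdev_nvme fails over from 10.0.0.2:4422 to 10.0.0.2:4420 before resetting the controller. A minimal sketch of how such a multi-path controller gets registered in the first place, reusing the rpc.py calls and addresses visible in the host/failover.sh trace below (paths shortened; the loop is an illustrative condensation, not the script itself):
# target side: expose the subsystem on three TCP listeners
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# initiator side: attach every trid under the same bdev name so the extra
# paths become failover targets for NVMe0n1
for port in 4420 4421 4422; do
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
Detaching whichever path is currently carrying I/O is what produces the SQ deletion aborts and the "Resetting controller successful" notice above.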
00:16:41.422 00:16:41.422 Latency(us) 00:16:41.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.422 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:41.422 Verification LBA range: start 0x0 length 0x4000 00:16:41.422 NVMe0n1 : 15.01 12813.09 50.05 324.42 0.00 9723.43 472.90 25737.77 00:16:41.422 =================================================================================================================== 00:16:41.422 Total : 12813.09 50.05 324.42 0.00 9723.43 472.90 25737.77 00:16:41.422 Received shutdown signal, test time was about 15.000000 seconds 00:16:41.422 00:16:41.422 Latency(us) 00:16:41.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.422 =================================================================================================================== 00:16:41.422 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:41.422 10:21:54 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:16:41.422 10:21:54 -- host/failover.sh@65 -- # count=3 00:16:41.422 10:21:54 -- host/failover.sh@67 -- # (( count != 3 )) 00:16:41.422 10:21:54 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:16:41.422 10:21:54 -- host/failover.sh@73 -- # bdevperf_pid=81829 00:16:41.422 10:21:54 -- host/failover.sh@75 -- # waitforlisten 81829 /var/tmp/bdevperf.sock 00:16:41.422 10:21:54 -- common/autotest_common.sh@819 -- # '[' -z 81829 ']' 00:16:41.422 10:21:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:41.422 10:21:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:41.422 10:21:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:41.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
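The bdevperf invocation above is started with -z, so the app comes up with no configuration and waits for RPCs on /var/tmp/bdevperf.sock; waitforlisten then blocks until that socket answers. A rough stand-in for the same launch-and-wait pattern (relative paths assumed; the polling loop replaces the repo's waitforlisten helper):
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
# poll the UNIX-domain RPC socket until bdevperf responds
until ./scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done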
00:16:41.422 10:21:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:41.422 10:21:54 -- common/autotest_common.sh@10 -- # set +x 00:16:41.680 10:21:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:41.681 10:21:54 -- common/autotest_common.sh@852 -- # return 0 00:16:41.681 10:21:54 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:41.939 [2024-07-26 10:21:55.225501] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:41.939 10:21:55 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:42.254 [2024-07-26 10:21:55.453709] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:42.254 10:21:55 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:42.512 NVMe0n1 00:16:42.512 10:21:55 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:42.771 00:16:42.771 10:21:56 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:43.029 00:16:43.029 10:21:56 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:43.030 10:21:56 -- host/failover.sh@82 -- # grep -q NVMe0 00:16:43.288 10:21:56 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:43.547 10:21:56 -- host/failover.sh@87 -- # sleep 3 00:16:46.833 10:21:59 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:46.833 10:21:59 -- host/failover.sh@88 -- # grep -q NVMe0 00:16:46.833 10:22:00 -- host/failover.sh@90 -- # run_test_pid=81908 00:16:46.833 10:22:00 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:46.833 10:22:00 -- host/failover.sh@92 -- # wait 81908 00:16:48.224 0 00:16:48.224 10:22:01 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:48.224 [2024-07-26 10:21:54.112750] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:16:48.224 [2024-07-26 10:21:54.113631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81829 ] 00:16:48.224 [2024-07-26 10:21:54.260119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.224 [2024-07-26 10:21:54.355804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.224 [2024-07-26 10:21:56.865326] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:48.224 [2024-07-26 10:21:56.865457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:48.224 [2024-07-26 10:21:56.865482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:48.224 [2024-07-26 10:21:56.865499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:48.224 [2024-07-26 10:21:56.865513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:48.224 [2024-07-26 10:21:56.865527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:48.224 [2024-07-26 10:21:56.865541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:48.224 [2024-07-26 10:21:56.865554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:48.224 [2024-07-26 10:21:56.865567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:48.224 [2024-07-26 10:21:56.865594] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:48.224 [2024-07-26 10:21:56.865648] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:48.224 [2024-07-26 10:21:56.865680] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c6ea0 (9): Bad file descriptor 00:16:48.224 [2024-07-26 10:21:56.876380] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:48.224 Running I/O for 1 seconds... 
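The try.txt excerpt above is the second, shorter run: NVMe0 was attached over 10.0.0.2:4420, the script detached that path, bdev_nvme failed over to 4421 and reset the controller, and the one-second verify job whose results follow ran on the surviving path. The forced failover amounts to detaching the active trid and re-driving the registered job, roughly (calls copied from the surrounding host/failover.sh trace, paths shortened):
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
sleep 3   # give bdev_nvme time to reset onto the next registered path
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests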
00:16:48.224 00:16:48.224 Latency(us) 00:16:48.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.224 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:48.224 Verification LBA range: start 0x0 length 0x4000 00:16:48.224 NVMe0n1 : 1.01 12974.75 50.68 0.00 0.00 9813.66 1050.07 14298.76 00:16:48.224 =================================================================================================================== 00:16:48.224 Total : 12974.75 50.68 0.00 0.00 9813.66 1050.07 14298.76 00:16:48.224 10:22:01 -- host/failover.sh@95 -- # grep -q NVMe0 00:16:48.224 10:22:01 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:48.224 10:22:01 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:48.483 10:22:01 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:48.483 10:22:01 -- host/failover.sh@99 -- # grep -q NVMe0 00:16:48.741 10:22:02 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:49.000 10:22:02 -- host/failover.sh@101 -- # sleep 3 00:16:52.293 10:22:05 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:52.293 10:22:05 -- host/failover.sh@103 -- # grep -q NVMe0 00:16:52.293 10:22:05 -- host/failover.sh@108 -- # killprocess 81829 00:16:52.293 10:22:05 -- common/autotest_common.sh@926 -- # '[' -z 81829 ']' 00:16:52.293 10:22:05 -- common/autotest_common.sh@930 -- # kill -0 81829 00:16:52.293 10:22:05 -- common/autotest_common.sh@931 -- # uname 00:16:52.293 10:22:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:52.293 10:22:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81829 00:16:52.293 10:22:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:52.293 10:22:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:52.293 10:22:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81829' 00:16:52.293 killing process with pid 81829 00:16:52.293 10:22:05 -- common/autotest_common.sh@945 -- # kill 81829 00:16:52.293 10:22:05 -- common/autotest_common.sh@950 -- # wait 81829 00:16:52.562 10:22:05 -- host/failover.sh@110 -- # sync 00:16:52.562 10:22:05 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:52.838 10:22:06 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:52.838 10:22:06 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:52.838 10:22:06 -- host/failover.sh@116 -- # nvmftestfini 00:16:52.838 10:22:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:52.838 10:22:06 -- nvmf/common.sh@116 -- # sync 00:16:52.838 10:22:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:52.838 10:22:06 -- nvmf/common.sh@119 -- # set +e 00:16:52.838 10:22:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:52.838 10:22:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:52.838 rmmod nvme_tcp 00:16:52.838 rmmod nvme_fabrics 00:16:52.838 rmmod nvme_keyring 00:16:52.838 10:22:06 -- nvmf/common.sh@122 
-- # modprobe -v -r nvme-fabrics 00:16:52.838 10:22:06 -- nvmf/common.sh@123 -- # set -e 00:16:52.838 10:22:06 -- nvmf/common.sh@124 -- # return 0 00:16:52.838 10:22:06 -- nvmf/common.sh@477 -- # '[' -n 81572 ']' 00:16:52.838 10:22:06 -- nvmf/common.sh@478 -- # killprocess 81572 00:16:52.838 10:22:06 -- common/autotest_common.sh@926 -- # '[' -z 81572 ']' 00:16:52.838 10:22:06 -- common/autotest_common.sh@930 -- # kill -0 81572 00:16:52.838 10:22:06 -- common/autotest_common.sh@931 -- # uname 00:16:52.838 10:22:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:52.838 10:22:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81572 00:16:52.838 10:22:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:52.838 10:22:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:52.838 killing process with pid 81572 00:16:52.838 10:22:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81572' 00:16:52.838 10:22:06 -- common/autotest_common.sh@945 -- # kill 81572 00:16:52.838 10:22:06 -- common/autotest_common.sh@950 -- # wait 81572 00:16:53.096 10:22:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:53.097 10:22:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:53.097 10:22:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:53.097 10:22:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:53.097 10:22:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:53.097 10:22:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.097 10:22:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.097 10:22:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.097 10:22:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:53.097 00:16:53.097 real 0m32.760s 00:16:53.097 user 2m6.907s 00:16:53.097 sys 0m5.539s 00:16:53.097 10:22:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:53.097 10:22:06 -- common/autotest_common.sh@10 -- # set +x 00:16:53.097 ************************************ 00:16:53.097 END TEST nvmf_failover 00:16:53.097 ************************************ 00:16:53.097 10:22:06 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:53.097 10:22:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:53.097 10:22:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:53.097 10:22:06 -- common/autotest_common.sh@10 -- # set +x 00:16:53.097 ************************************ 00:16:53.097 START TEST nvmf_discovery 00:16:53.097 ************************************ 00:16:53.097 10:22:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:53.097 * Looking for test storage... 
00:16:53.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:53.356 10:22:06 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:53.356 10:22:06 -- nvmf/common.sh@7 -- # uname -s 00:16:53.356 10:22:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:53.356 10:22:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:53.356 10:22:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:53.356 10:22:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:53.356 10:22:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:53.356 10:22:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:53.356 10:22:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:53.356 10:22:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:53.356 10:22:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:53.356 10:22:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:53.356 10:22:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:16:53.356 10:22:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:16:53.356 10:22:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:53.356 10:22:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:53.356 10:22:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:53.356 10:22:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:53.356 10:22:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:53.356 10:22:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:53.356 10:22:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:53.356 10:22:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.356 10:22:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.356 10:22:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.356 10:22:06 -- paths/export.sh@5 
-- # export PATH 00:16:53.356 10:22:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.356 10:22:06 -- nvmf/common.sh@46 -- # : 0 00:16:53.356 10:22:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:53.356 10:22:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:53.356 10:22:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:53.356 10:22:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:53.356 10:22:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:53.356 10:22:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:53.356 10:22:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:53.356 10:22:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:53.356 10:22:06 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:53.356 10:22:06 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:53.356 10:22:06 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:53.356 10:22:06 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:53.356 10:22:06 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:53.356 10:22:06 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:53.356 10:22:06 -- host/discovery.sh@25 -- # nvmftestinit 00:16:53.356 10:22:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:53.356 10:22:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:53.356 10:22:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:53.356 10:22:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:53.356 10:22:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:53.356 10:22:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.356 10:22:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.356 10:22:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.356 10:22:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:53.356 10:22:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:53.356 10:22:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:53.356 10:22:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:53.356 10:22:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:53.356 10:22:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:53.356 10:22:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:53.356 10:22:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:53.356 10:22:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:53.356 10:22:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:53.356 10:22:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:53.356 10:22:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:53.356 10:22:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:53.356 10:22:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:53.356 10:22:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:53.356 
10:22:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:53.356 10:22:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:53.356 10:22:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:53.356 10:22:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:53.356 10:22:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:53.357 Cannot find device "nvmf_tgt_br" 00:16:53.357 10:22:06 -- nvmf/common.sh@154 -- # true 00:16:53.357 10:22:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:53.357 Cannot find device "nvmf_tgt_br2" 00:16:53.357 10:22:06 -- nvmf/common.sh@155 -- # true 00:16:53.357 10:22:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:53.357 10:22:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:53.357 Cannot find device "nvmf_tgt_br" 00:16:53.357 10:22:06 -- nvmf/common.sh@157 -- # true 00:16:53.357 10:22:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:53.357 Cannot find device "nvmf_tgt_br2" 00:16:53.357 10:22:06 -- nvmf/common.sh@158 -- # true 00:16:53.357 10:22:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:53.357 10:22:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:53.357 10:22:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:53.357 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:53.357 10:22:06 -- nvmf/common.sh@161 -- # true 00:16:53.357 10:22:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:53.357 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:53.357 10:22:06 -- nvmf/common.sh@162 -- # true 00:16:53.357 10:22:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:53.357 10:22:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:53.357 10:22:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:53.357 10:22:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:53.357 10:22:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:53.357 10:22:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:53.357 10:22:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:53.357 10:22:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:53.357 10:22:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:53.357 10:22:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:53.357 10:22:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:53.357 10:22:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:53.357 10:22:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:53.357 10:22:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:53.357 10:22:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:53.615 10:22:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:53.615 10:22:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:53.615 10:22:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:53.615 10:22:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br 
master nvmf_br 00:16:53.615 10:22:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:53.615 10:22:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:53.615 10:22:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:53.615 10:22:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:53.616 10:22:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:53.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:53.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:16:53.616 00:16:53.616 --- 10.0.0.2 ping statistics --- 00:16:53.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.616 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:53.616 10:22:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:53.616 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:53.616 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:16:53.616 00:16:53.616 --- 10.0.0.3 ping statistics --- 00:16:53.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.616 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:16:53.616 10:22:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:53.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:53.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:53.616 00:16:53.616 --- 10.0.0.1 ping statistics --- 00:16:53.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.616 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:53.616 10:22:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:53.616 10:22:06 -- nvmf/common.sh@421 -- # return 0 00:16:53.616 10:22:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:53.616 10:22:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:53.616 10:22:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:53.616 10:22:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:53.616 10:22:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:53.616 10:22:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:53.616 10:22:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:53.616 10:22:06 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:53.616 10:22:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:53.616 10:22:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:53.616 10:22:06 -- common/autotest_common.sh@10 -- # set +x 00:16:53.616 10:22:06 -- nvmf/common.sh@469 -- # nvmfpid=82182 00:16:53.616 10:22:06 -- nvmf/common.sh@470 -- # waitforlisten 82182 00:16:53.616 10:22:06 -- common/autotest_common.sh@819 -- # '[' -z 82182 ']' 00:16:53.616 10:22:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:53.616 10:22:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.616 10:22:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:53.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.616 10:22:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
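nvmf_veth_init has just finished building the virtual topology the whole test rides on, and the target app is being started inside the namespace. A condensed sketch of that setup, paraphrased from the commands in the trace (per-link up steps, the lo device inside the namespace, and the pre-clean teardown pass are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target interface
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for port in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" master nvmf_br up                     # bridge the three veth peers
    done
    ip link set nvmf_br up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # initiator -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target namespace -> initiator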
00:16:53.616 10:22:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:53.616 10:22:06 -- common/autotest_common.sh@10 -- # set +x 00:16:53.616 [2024-07-26 10:22:06.968837] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:16:53.616 [2024-07-26 10:22:06.968937] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.875 [2024-07-26 10:22:07.109765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.875 [2024-07-26 10:22:07.208843] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:53.875 [2024-07-26 10:22:07.209026] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.875 [2024-07-26 10:22:07.209045] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:53.875 [2024-07-26 10:22:07.209056] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:53.875 [2024-07-26 10:22:07.209085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.811 10:22:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:54.811 10:22:07 -- common/autotest_common.sh@852 -- # return 0 00:16:54.811 10:22:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:54.811 10:22:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:54.811 10:22:07 -- common/autotest_common.sh@10 -- # set +x 00:16:54.811 10:22:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.811 10:22:07 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:54.812 10:22:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:54.812 10:22:07 -- common/autotest_common.sh@10 -- # set +x 00:16:54.812 [2024-07-26 10:22:07.999669] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:54.812 10:22:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:54.812 10:22:08 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:16:54.812 10:22:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:54.812 10:22:08 -- common/autotest_common.sh@10 -- # set +x 00:16:54.812 [2024-07-26 10:22:08.007828] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:54.812 10:22:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:54.812 10:22:08 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:54.812 10:22:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:54.812 10:22:08 -- common/autotest_common.sh@10 -- # set +x 00:16:54.812 null0 00:16:54.812 10:22:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:54.812 10:22:08 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:54.812 10:22:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:54.812 10:22:08 -- common/autotest_common.sh@10 -- # set +x 00:16:54.812 null1 00:16:54.812 10:22:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:54.812 10:22:08 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:54.812 10:22:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:54.812 10:22:08 -- 
common/autotest_common.sh@10 -- # set +x 00:16:54.812 10:22:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:54.812 10:22:08 -- host/discovery.sh@45 -- # hostpid=82213 00:16:54.812 10:22:08 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:54.812 10:22:08 -- host/discovery.sh@46 -- # waitforlisten 82213 /tmp/host.sock 00:16:54.812 10:22:08 -- common/autotest_common.sh@819 -- # '[' -z 82213 ']' 00:16:54.812 10:22:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:16:54.812 10:22:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:54.812 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:54.812 10:22:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:54.812 10:22:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:54.812 10:22:08 -- common/autotest_common.sh@10 -- # set +x 00:16:54.812 [2024-07-26 10:22:08.083401] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:16:54.812 [2024-07-26 10:22:08.083483] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82213 ] 00:16:54.812 [2024-07-26 10:22:08.217524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.071 [2024-07-26 10:22:08.308363] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:55.071 [2024-07-26 10:22:08.308516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.638 10:22:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:55.638 10:22:09 -- common/autotest_common.sh@852 -- # return 0 00:16:55.638 10:22:09 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:55.638 10:22:09 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:55.638 10:22:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:55.638 10:22:09 -- common/autotest_common.sh@10 -- # set +x 00:16:55.638 10:22:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:55.638 10:22:09 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:55.638 10:22:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:55.638 10:22:09 -- common/autotest_common.sh@10 -- # set +x 00:16:55.898 10:22:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:55.898 10:22:09 -- host/discovery.sh@72 -- # notify_id=0 00:16:55.898 10:22:09 -- host/discovery.sh@78 -- # get_subsystem_names 00:16:55.898 10:22:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:55.898 10:22:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:55.898 10:22:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:55.898 10:22:09 -- host/discovery.sh@59 -- # sort 00:16:55.898 10:22:09 -- common/autotest_common.sh@10 -- # set +x 00:16:55.898 10:22:09 -- host/discovery.sh@59 -- # xargs 00:16:55.898 10:22:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:55.898 10:22:09 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:16:55.898 10:22:09 -- host/discovery.sh@79 -- # get_bdev_list 00:16:55.898 
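Everything so far on the SPDK side has been plain JSON-RPC against two applications: the target (nvmf_tgt inside the namespace, default socket /var/tmp/spdk.sock) and the host-side app started on /tmp/host.sock; the empty-state check through get_bdev_list continues just below. A condensed recap of the calls traced above, with rpc_cmd being the autotest helper that forwards to scripts/rpc.py (arguments copied from the trace, error handling omitted):

    # Target side
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
            -t tcp -a 10.0.0.2 -s 8009                    # discovery service on 8009
    rpc_cmd bdev_null_create null0 1000 512               # backing bdevs for later namespaces
    rpc_cmd bdev_null_create null1 1000 512
    rpc_cmd bdev_wait_for_examine

    # Host side: a second nvmf_tgt acting as the initiator-side bdev app
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
            -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test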
10:22:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:55.898 10:22:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:55.898 10:22:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:55.898 10:22:09 -- common/autotest_common.sh@10 -- # set +x 00:16:55.898 10:22:09 -- host/discovery.sh@55 -- # sort 00:16:55.898 10:22:09 -- host/discovery.sh@55 -- # xargs 00:16:55.898 10:22:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:55.898 10:22:09 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:16:55.898 10:22:09 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:55.898 10:22:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:55.898 10:22:09 -- common/autotest_common.sh@10 -- # set +x 00:16:55.898 10:22:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:55.898 10:22:09 -- host/discovery.sh@82 -- # get_subsystem_names 00:16:55.898 10:22:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:55.898 10:22:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:55.898 10:22:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:55.898 10:22:09 -- host/discovery.sh@59 -- # sort 00:16:55.898 10:22:09 -- common/autotest_common.sh@10 -- # set +x 00:16:55.898 10:22:09 -- host/discovery.sh@59 -- # xargs 00:16:55.898 10:22:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:55.898 10:22:09 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:16:55.898 10:22:09 -- host/discovery.sh@83 -- # get_bdev_list 00:16:55.898 10:22:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:55.898 10:22:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:55.898 10:22:09 -- common/autotest_common.sh@10 -- # set +x 00:16:55.898 10:22:09 -- host/discovery.sh@55 -- # sort 00:16:55.898 10:22:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:55.898 10:22:09 -- host/discovery.sh@55 -- # xargs 00:16:55.898 10:22:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:55.898 10:22:09 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:55.898 10:22:09 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:55.898 10:22:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:55.898 10:22:09 -- common/autotest_common.sh@10 -- # set +x 00:16:56.157 10:22:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:56.157 10:22:09 -- host/discovery.sh@86 -- # get_subsystem_names 00:16:56.157 10:22:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:56.157 10:22:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:56.157 10:22:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:56.157 10:22:09 -- host/discovery.sh@59 -- # sort 00:16:56.157 10:22:09 -- host/discovery.sh@59 -- # xargs 00:16:56.157 10:22:09 -- common/autotest_common.sh@10 -- # set +x 00:16:56.157 10:22:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:56.157 10:22:09 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:16:56.157 10:22:09 -- host/discovery.sh@87 -- # get_bdev_list 00:16:56.157 10:22:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:56.157 10:22:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:56.157 10:22:09 -- host/discovery.sh@55 -- # sort 00:16:56.157 10:22:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:56.157 10:22:09 -- host/discovery.sh@55 -- # xargs 00:16:56.157 10:22:09 -- common/autotest_common.sh@10 -- # set 
+x 00:16:56.157 10:22:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:56.157 10:22:09 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:56.157 10:22:09 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:56.157 10:22:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:56.157 10:22:09 -- common/autotest_common.sh@10 -- # set +x 00:16:56.157 [2024-07-26 10:22:09.472282] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.157 10:22:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:56.157 10:22:09 -- host/discovery.sh@92 -- # get_subsystem_names 00:16:56.157 10:22:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:56.157 10:22:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:56.157 10:22:09 -- host/discovery.sh@59 -- # xargs 00:16:56.157 10:22:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:56.157 10:22:09 -- host/discovery.sh@59 -- # sort 00:16:56.157 10:22:09 -- common/autotest_common.sh@10 -- # set +x 00:16:56.157 10:22:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:56.157 10:22:09 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:56.157 10:22:09 -- host/discovery.sh@93 -- # get_bdev_list 00:16:56.157 10:22:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:56.157 10:22:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:56.157 10:22:09 -- common/autotest_common.sh@10 -- # set +x 00:16:56.157 10:22:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:56.157 10:22:09 -- host/discovery.sh@55 -- # sort 00:16:56.157 10:22:09 -- host/discovery.sh@55 -- # xargs 00:16:56.157 10:22:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:56.157 10:22:09 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:16:56.157 10:22:09 -- host/discovery.sh@94 -- # get_notification_count 00:16:56.157 10:22:09 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:56.157 10:22:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:56.157 10:22:09 -- common/autotest_common.sh@10 -- # set +x 00:16:56.157 10:22:09 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:56.157 10:22:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:56.416 10:22:09 -- host/discovery.sh@74 -- # notification_count=0 00:16:56.416 10:22:09 -- host/discovery.sh@75 -- # notify_id=0 00:16:56.416 10:22:09 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:16:56.416 10:22:09 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:56.416 10:22:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:56.416 10:22:09 -- common/autotest_common.sh@10 -- # set +x 00:16:56.416 10:22:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:56.416 10:22:09 -- host/discovery.sh@100 -- # sleep 1 00:16:56.674 [2024-07-26 10:22:10.102683] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:56.674 [2024-07-26 10:22:10.102730] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:56.674 [2024-07-26 10:22:10.102750] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:56.674 [2024-07-26 10:22:10.108748] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:56.932 [2024-07-26 10:22:10.165005] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:56.932 [2024-07-26 10:22:10.165051] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:57.499 10:22:10 -- host/discovery.sh@101 -- # get_subsystem_names 00:16:57.499 10:22:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:57.499 10:22:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:57.499 10:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:57.499 10:22:10 -- common/autotest_common.sh@10 -- # set +x 00:16:57.499 10:22:10 -- host/discovery.sh@59 -- # sort 00:16:57.499 10:22:10 -- host/discovery.sh@59 -- # xargs 00:16:57.499 10:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:57.499 10:22:10 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.499 10:22:10 -- host/discovery.sh@102 -- # get_bdev_list 00:16:57.499 10:22:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:57.499 10:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:57.499 10:22:10 -- common/autotest_common.sh@10 -- # set +x 00:16:57.499 10:22:10 -- host/discovery.sh@55 -- # sort 00:16:57.499 10:22:10 -- host/discovery.sh@55 -- # xargs 00:16:57.499 10:22:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:57.499 10:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:57.499 10:22:10 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:57.499 10:22:10 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:16:57.499 10:22:10 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:57.499 10:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:57.499 10:22:10 -- common/autotest_common.sh@10 -- # set +x 00:16:57.499 10:22:10 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:57.499 10:22:10 -- host/discovery.sh@63 -- # sort -n 00:16:57.499 10:22:10 -- host/discovery.sh@63 -- # xargs 00:16:57.499 10:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:57.499 10:22:10 -- host/discovery.sh@103 
-- # [[ 4420 == \4\4\2\0 ]] 00:16:57.499 10:22:10 -- host/discovery.sh@104 -- # get_notification_count 00:16:57.499 10:22:10 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:57.499 10:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:57.499 10:22:10 -- common/autotest_common.sh@10 -- # set +x 00:16:57.499 10:22:10 -- host/discovery.sh@74 -- # jq '. | length' 00:16:57.499 10:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:57.499 10:22:10 -- host/discovery.sh@74 -- # notification_count=1 00:16:57.499 10:22:10 -- host/discovery.sh@75 -- # notify_id=1 00:16:57.499 10:22:10 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:16:57.499 10:22:10 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:57.499 10:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:57.499 10:22:10 -- common/autotest_common.sh@10 -- # set +x 00:16:57.499 10:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:57.499 10:22:10 -- host/discovery.sh@109 -- # sleep 1 00:16:58.434 10:22:11 -- host/discovery.sh@110 -- # get_bdev_list 00:16:58.434 10:22:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:58.434 10:22:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:58.434 10:22:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:58.434 10:22:11 -- host/discovery.sh@55 -- # sort 00:16:58.434 10:22:11 -- common/autotest_common.sh@10 -- # set +x 00:16:58.692 10:22:11 -- host/discovery.sh@55 -- # xargs 00:16:58.692 10:22:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:58.692 10:22:11 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:58.692 10:22:11 -- host/discovery.sh@111 -- # get_notification_count 00:16:58.692 10:22:11 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:58.692 10:22:11 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:58.692 10:22:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:58.692 10:22:11 -- common/autotest_common.sh@10 -- # set +x 00:16:58.692 10:22:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:58.692 10:22:11 -- host/discovery.sh@74 -- # notification_count=1 00:16:58.692 10:22:11 -- host/discovery.sh@75 -- # notify_id=2 00:16:58.692 10:22:11 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:16:58.692 10:22:12 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:16:58.692 10:22:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:58.692 10:22:12 -- common/autotest_common.sh@10 -- # set +x 00:16:58.692 [2024-07-26 10:22:12.007441] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:58.692 [2024-07-26 10:22:12.008268] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:58.692 [2024-07-26 10:22:12.008305] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:58.692 10:22:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:58.692 10:22:12 -- host/discovery.sh@117 -- # sleep 1 00:16:58.692 [2024-07-26 10:22:12.014263] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:16:58.692 [2024-07-26 10:22:12.075535] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:58.693 [2024-07-26 10:22:12.075608] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:58.693 [2024-07-26 10:22:12.075616] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:59.627 10:22:13 -- host/discovery.sh@118 -- # get_subsystem_names 00:16:59.627 10:22:13 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:59.627 10:22:13 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:59.627 10:22:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:59.627 10:22:13 -- common/autotest_common.sh@10 -- # set +x 00:16:59.627 10:22:13 -- host/discovery.sh@59 -- # sort 00:16:59.627 10:22:13 -- host/discovery.sh@59 -- # xargs 00:16:59.627 10:22:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:59.627 10:22:13 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.627 10:22:13 -- host/discovery.sh@119 -- # get_bdev_list 00:16:59.627 10:22:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:59.885 10:22:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:59.885 10:22:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:59.885 10:22:13 -- common/autotest_common.sh@10 -- # set +x 00:16:59.885 10:22:13 -- host/discovery.sh@55 -- # sort 00:16:59.885 10:22:13 -- host/discovery.sh@55 -- # xargs 00:16:59.885 10:22:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:59.885 10:22:13 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:59.885 10:22:13 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:16:59.885 10:22:13 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:59.885 10:22:13 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:16:59.885 10:22:13 -- common/autotest_common.sh@10 -- # set +x 00:16:59.885 10:22:13 -- host/discovery.sh@63 -- # sort -n 00:16:59.885 10:22:13 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:59.885 10:22:13 -- host/discovery.sh@63 -- # xargs 00:16:59.885 10:22:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:59.885 10:22:13 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:59.885 10:22:13 -- host/discovery.sh@121 -- # get_notification_count 00:16:59.885 10:22:13 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:59.885 10:22:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:59.885 10:22:13 -- common/autotest_common.sh@10 -- # set +x 00:16:59.885 10:22:13 -- host/discovery.sh@74 -- # jq '. | length' 00:16:59.885 10:22:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:59.885 10:22:13 -- host/discovery.sh@74 -- # notification_count=0 00:16:59.885 10:22:13 -- host/discovery.sh@75 -- # notify_id=2 00:16:59.885 10:22:13 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:16:59.885 10:22:13 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:59.885 10:22:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:59.885 10:22:13 -- common/autotest_common.sh@10 -- # set +x 00:16:59.885 [2024-07-26 10:22:13.269707] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:59.885 [2024-07-26 10:22:13.269745] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:59.885 [2024-07-26 10:22:13.272458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.885 [2024-07-26 10:22:13.272503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.885 [2024-07-26 10:22:13.272517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.886 [2024-07-26 10:22:13.272527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.886 [2024-07-26 10:22:13.272537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.886 [2024-07-26 10:22:13.272546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.886 [2024-07-26 10:22:13.272556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.886 [2024-07-26 10:22:13.272565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.886 [2024-07-26 10:22:13.272583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16740c0 is same with the state(5) to be set 00:16:59.886 10:22:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:59.886 10:22:13 -- host/discovery.sh@127 -- # sleep 1 00:16:59.886 [2024-07-26 10:22:13.275707] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 
not found 00:16:59.886 [2024-07-26 10:22:13.275737] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:59.886 [2024-07-26 10:22:13.275813] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16740c0 (9): Bad file descriptor 00:17:01.260 10:22:14 -- host/discovery.sh@128 -- # get_subsystem_names 00:17:01.260 10:22:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:01.260 10:22:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:01.260 10:22:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:01.260 10:22:14 -- common/autotest_common.sh@10 -- # set +x 00:17:01.260 10:22:14 -- host/discovery.sh@59 -- # sort 00:17:01.260 10:22:14 -- host/discovery.sh@59 -- # xargs 00:17:01.260 10:22:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:01.260 10:22:14 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.260 10:22:14 -- host/discovery.sh@129 -- # get_bdev_list 00:17:01.260 10:22:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:01.260 10:22:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:01.260 10:22:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:01.260 10:22:14 -- common/autotest_common.sh@10 -- # set +x 00:17:01.260 10:22:14 -- host/discovery.sh@55 -- # xargs 00:17:01.260 10:22:14 -- host/discovery.sh@55 -- # sort 00:17:01.260 10:22:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:01.260 10:22:14 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:01.260 10:22:14 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:17:01.260 10:22:14 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:01.260 10:22:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:01.260 10:22:14 -- common/autotest_common.sh@10 -- # set +x 00:17:01.260 10:22:14 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:01.260 10:22:14 -- host/discovery.sh@63 -- # sort -n 00:17:01.260 10:22:14 -- host/discovery.sh@63 -- # xargs 00:17:01.260 10:22:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:01.260 10:22:14 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:17:01.260 10:22:14 -- host/discovery.sh@131 -- # get_notification_count 00:17:01.260 10:22:14 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:01.260 10:22:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:01.260 10:22:14 -- common/autotest_common.sh@10 -- # set +x 00:17:01.260 10:22:14 -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:01.260 10:22:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:01.260 10:22:14 -- host/discovery.sh@74 -- # notification_count=0 00:17:01.260 10:22:14 -- host/discovery.sh@75 -- # notify_id=2 00:17:01.260 10:22:14 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:17:01.260 10:22:14 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:17:01.260 10:22:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:01.260 10:22:14 -- common/autotest_common.sh@10 -- # set +x 00:17:01.260 10:22:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:01.260 10:22:14 -- host/discovery.sh@135 -- # sleep 1 00:17:02.195 10:22:15 -- host/discovery.sh@136 -- # get_subsystem_names 00:17:02.195 10:22:15 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:02.195 10:22:15 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:02.195 10:22:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:02.195 10:22:15 -- common/autotest_common.sh@10 -- # set +x 00:17:02.195 10:22:15 -- host/discovery.sh@59 -- # sort 00:17:02.195 10:22:15 -- host/discovery.sh@59 -- # xargs 00:17:02.195 10:22:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:02.195 10:22:15 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:17:02.195 10:22:15 -- host/discovery.sh@137 -- # get_bdev_list 00:17:02.195 10:22:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:02.195 10:22:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:02.195 10:22:15 -- host/discovery.sh@55 -- # sort 00:17:02.195 10:22:15 -- host/discovery.sh@55 -- # xargs 00:17:02.195 10:22:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:02.195 10:22:15 -- common/autotest_common.sh@10 -- # set +x 00:17:02.195 10:22:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:02.454 10:22:15 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:17:02.454 10:22:15 -- host/discovery.sh@138 -- # get_notification_count 00:17:02.454 10:22:15 -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:02.454 10:22:15 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:02.454 10:22:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:02.454 10:22:15 -- common/autotest_common.sh@10 -- # set +x 00:17:02.454 10:22:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:02.454 10:22:15 -- host/discovery.sh@74 -- # notification_count=2 00:17:02.454 10:22:15 -- host/discovery.sh@75 -- # notify_id=4 00:17:02.454 10:22:15 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:17:02.454 10:22:15 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:02.454 10:22:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:02.454 10:22:15 -- common/autotest_common.sh@10 -- # set +x 00:17:03.389 [2024-07-26 10:22:16.735446] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:03.389 [2024-07-26 10:22:16.735485] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:03.389 [2024-07-26 10:22:16.735502] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:03.389 [2024-07-26 10:22:16.741482] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:17:03.389 [2024-07-26 10:22:16.801115] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:03.389 [2024-07-26 10:22:16.801176] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:03.389 10:22:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:03.389 10:22:16 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:03.389 10:22:16 -- common/autotest_common.sh@640 -- # local es=0 00:17:03.389 10:22:16 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:03.389 10:22:16 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:17:03.389 10:22:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:03.389 10:22:16 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:17:03.389 10:22:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:03.389 10:22:16 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:03.389 10:22:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:03.389 10:22:16 -- common/autotest_common.sh@10 -- # set +x 00:17:03.389 request: 00:17:03.389 { 00:17:03.389 "name": "nvme", 00:17:03.389 "trtype": "tcp", 00:17:03.389 "traddr": "10.0.0.2", 00:17:03.389 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:03.389 "adrfam": "ipv4", 00:17:03.389 "trsvcid": "8009", 00:17:03.389 "wait_for_attach": true, 00:17:03.389 "method": "bdev_nvme_start_discovery", 00:17:03.389 "req_id": 1 00:17:03.389 } 00:17:03.389 Got JSON-RPC error response 00:17:03.389 response: 00:17:03.389 { 00:17:03.389 "code": -17, 00:17:03.389 "message": "File exists" 00:17:03.389 } 00:17:03.389 10:22:16 -- common/autotest_common.sh@579 -- # 
[[ 1 == 0 ]] 00:17:03.389 10:22:16 -- common/autotest_common.sh@643 -- # es=1 00:17:03.389 10:22:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:03.389 10:22:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:03.389 10:22:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:03.389 10:22:16 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:17:03.389 10:22:16 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:03.389 10:22:16 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:03.389 10:22:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:03.389 10:22:16 -- common/autotest_common.sh@10 -- # set +x 00:17:03.389 10:22:16 -- host/discovery.sh@67 -- # xargs 00:17:03.389 10:22:16 -- host/discovery.sh@67 -- # sort 00:17:03.389 10:22:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:03.648 10:22:16 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:17:03.648 10:22:16 -- host/discovery.sh@147 -- # get_bdev_list 00:17:03.648 10:22:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:03.648 10:22:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:03.648 10:22:16 -- host/discovery.sh@55 -- # sort 00:17:03.648 10:22:16 -- host/discovery.sh@55 -- # xargs 00:17:03.648 10:22:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:03.648 10:22:16 -- common/autotest_common.sh@10 -- # set +x 00:17:03.648 10:22:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:03.648 10:22:16 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:03.648 10:22:16 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:03.648 10:22:16 -- common/autotest_common.sh@640 -- # local es=0 00:17:03.648 10:22:16 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:03.648 10:22:16 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:17:03.648 10:22:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:03.648 10:22:16 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:17:03.648 10:22:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:03.648 10:22:16 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:03.648 10:22:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:03.648 10:22:16 -- common/autotest_common.sh@10 -- # set +x 00:17:03.648 request: 00:17:03.648 { 00:17:03.648 "name": "nvme_second", 00:17:03.648 "trtype": "tcp", 00:17:03.648 "traddr": "10.0.0.2", 00:17:03.648 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:03.648 "adrfam": "ipv4", 00:17:03.648 "trsvcid": "8009", 00:17:03.648 "wait_for_attach": true, 00:17:03.648 "method": "bdev_nvme_start_discovery", 00:17:03.648 "req_id": 1 00:17:03.648 } 00:17:03.648 Got JSON-RPC error response 00:17:03.648 response: 00:17:03.648 { 00:17:03.648 "code": -17, 00:17:03.648 "message": "File exists" 00:17:03.648 } 00:17:03.648 10:22:16 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:17:03.648 10:22:16 -- common/autotest_common.sh@643 -- # es=1 00:17:03.648 10:22:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:03.648 10:22:16 -- common/autotest_common.sh@662 -- 
# [[ -n '' ]] 00:17:03.648 10:22:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:03.648 10:22:16 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:17:03.648 10:22:16 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:03.649 10:22:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:03.649 10:22:16 -- common/autotest_common.sh@10 -- # set +x 00:17:03.649 10:22:16 -- host/discovery.sh@67 -- # sort 00:17:03.649 10:22:16 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:03.649 10:22:16 -- host/discovery.sh@67 -- # xargs 00:17:03.649 10:22:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:03.649 10:22:16 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:17:03.649 10:22:17 -- host/discovery.sh@153 -- # get_bdev_list 00:17:03.649 10:22:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:03.649 10:22:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:03.649 10:22:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:03.649 10:22:17 -- host/discovery.sh@55 -- # sort 00:17:03.649 10:22:17 -- common/autotest_common.sh@10 -- # set +x 00:17:03.649 10:22:17 -- host/discovery.sh@55 -- # xargs 00:17:03.649 10:22:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:03.649 10:22:17 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:03.649 10:22:17 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:03.649 10:22:17 -- common/autotest_common.sh@640 -- # local es=0 00:17:03.649 10:22:17 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:03.649 10:22:17 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:17:03.649 10:22:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:03.649 10:22:17 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:17:03.649 10:22:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:03.649 10:22:17 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:03.649 10:22:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:03.649 10:22:17 -- common/autotest_common.sh@10 -- # set +x 00:17:05.025 [2024-07-26 10:22:18.062802] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:05.025 [2024-07-26 10:22:18.062929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:05.025 [2024-07-26 10:22:18.062975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:05.025 [2024-07-26 10:22:18.062991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c5a0 with addr=10.0.0.2, port=8010 00:17:05.025 [2024-07-26 10:22:18.063015] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:05.025 [2024-07-26 10:22:18.063025] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:05.025 [2024-07-26 10:22:18.063035] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:05.962 [2024-07-26 10:22:19.062798] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:05.962 
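What is being exercised here: the host asks for a second discovery connection to 10.0.0.2 port 8010, where nothing is listening, this time with a 3-second attach timeout instead of -w. A sketch of the call as issued above and the outcome the trace shows (values copied from the request and response bodies; the repeated connect() failures with errno 111 are the retries leading up to the timeout):

    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
            -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
    # -T 3000 -> "attach_timeout_ms": 3000; with no listener on 8010 the connects
    # keep failing until the timeout expires and the RPC returns code -110,
    # "Connection timed out", which is exactly what the test expects.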
[2024-07-26 10:22:19.062907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:05.962 [2024-07-26 10:22:19.062952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:05.962 [2024-07-26 10:22:19.062968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c5a0 with addr=10.0.0.2, port=8010 00:17:05.962 [2024-07-26 10:22:19.062991] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:05.962 [2024-07-26 10:22:19.063001] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:05.962 [2024-07-26 10:22:19.063011] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:06.898 [2024-07-26 10:22:20.062653] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:17:06.898 request: 00:17:06.898 { 00:17:06.898 "name": "nvme_second", 00:17:06.898 "trtype": "tcp", 00:17:06.898 "traddr": "10.0.0.2", 00:17:06.898 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:06.898 "adrfam": "ipv4", 00:17:06.898 "trsvcid": "8010", 00:17:06.898 "attach_timeout_ms": 3000, 00:17:06.898 "method": "bdev_nvme_start_discovery", 00:17:06.898 "req_id": 1 00:17:06.898 } 00:17:06.898 Got JSON-RPC error response 00:17:06.898 response: 00:17:06.898 { 00:17:06.898 "code": -110, 00:17:06.898 "message": "Connection timed out" 00:17:06.898 } 00:17:06.898 10:22:20 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:17:06.898 10:22:20 -- common/autotest_common.sh@643 -- # es=1 00:17:06.898 10:22:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:06.898 10:22:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:06.898 10:22:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:06.898 10:22:20 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:17:06.898 10:22:20 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:06.898 10:22:20 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:06.898 10:22:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:06.898 10:22:20 -- host/discovery.sh@67 -- # sort 00:17:06.898 10:22:20 -- common/autotest_common.sh@10 -- # set +x 00:17:06.898 10:22:20 -- host/discovery.sh@67 -- # xargs 00:17:06.898 10:22:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:06.898 10:22:20 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:17:06.898 10:22:20 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:17:06.898 10:22:20 -- host/discovery.sh@162 -- # kill 82213 00:17:06.898 10:22:20 -- host/discovery.sh@163 -- # nvmftestfini 00:17:06.898 10:22:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:06.898 10:22:20 -- nvmf/common.sh@116 -- # sync 00:17:06.898 10:22:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:06.898 10:22:20 -- nvmf/common.sh@119 -- # set +e 00:17:06.898 10:22:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:06.898 10:22:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:06.898 rmmod nvme_tcp 00:17:06.898 rmmod nvme_fabrics 00:17:06.898 rmmod nvme_keyring 00:17:06.898 10:22:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:06.898 10:22:20 -- nvmf/common.sh@123 -- # set -e 00:17:06.898 10:22:20 -- nvmf/common.sh@124 -- # return 0 00:17:06.898 10:22:20 -- nvmf/common.sh@477 -- # '[' -n 82182 ']' 00:17:06.898 10:22:20 -- nvmf/common.sh@478 -- # killprocess 82182 00:17:06.898 10:22:20 -- common/autotest_common.sh@926 -- # '[' -z 82182 ']' 
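The shutdown sequence running here is the test's own trap handler plus nvmftestfini from nvmf/common.sh. Condensed from the surrounding trace (pid values are the ones from this particular run):

    kill 82213                      # hostpid: the /tmp/host.sock bdev app
    nvmftestfini                    # trap handler from nvmf/common.sh, roughly:
    #   modprobe -v -r nvme-tcp     #   unloads nvme_tcp, nvme_fabrics, nvme_keyring
    #   killprocess 82182           #   nvmf_tgt running inside nvmf_tgt_ns_spdk
    #   nvmf_tcp_fini               #   remove_spdk_ns + ip -4 addr flush nvmf_init_if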
00:17:06.898 10:22:20 -- common/autotest_common.sh@930 -- # kill -0 82182 00:17:06.898 10:22:20 -- common/autotest_common.sh@931 -- # uname 00:17:06.898 10:22:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:06.898 10:22:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82182 00:17:06.898 killing process with pid 82182 00:17:06.898 10:22:20 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:06.898 10:22:20 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:06.898 10:22:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82182' 00:17:06.898 10:22:20 -- common/autotest_common.sh@945 -- # kill 82182 00:17:06.898 10:22:20 -- common/autotest_common.sh@950 -- # wait 82182 00:17:07.157 10:22:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:07.157 10:22:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:07.157 10:22:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:07.157 10:22:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:07.157 10:22:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:07.157 10:22:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.157 10:22:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.157 10:22:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.157 10:22:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:07.157 00:17:07.157 real 0m14.019s 00:17:07.157 user 0m26.871s 00:17:07.157 sys 0m2.424s 00:17:07.157 10:22:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:07.157 ************************************ 00:17:07.157 END TEST nvmf_discovery 00:17:07.157 ************************************ 00:17:07.157 10:22:20 -- common/autotest_common.sh@10 -- # set +x 00:17:07.158 10:22:20 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:07.158 10:22:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:07.158 10:22:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:07.158 10:22:20 -- common/autotest_common.sh@10 -- # set +x 00:17:07.158 ************************************ 00:17:07.158 START TEST nvmf_discovery_remove_ifc 00:17:07.158 ************************************ 00:17:07.158 10:22:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:07.417 * Looking for test storage... 
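With nvmf_discovery finished, nvmf.sh immediately chains into the next host-mode suite. The invocation is verbatim from the trace; the surrounding behaviour (timing, banners, failure propagation) comes from the run_test helper in autotest_common.sh:

    run_test nvmf_discovery_remove_ifc \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
    # run_test validates its argument count, times the script (the real/user/sys
    # lines above), prints the START TEST / END TEST banners, and fails the build
    # if the script exits non-zero.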
00:17:07.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:07.417 10:22:20 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:07.417 10:22:20 -- nvmf/common.sh@7 -- # uname -s 00:17:07.417 10:22:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.417 10:22:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.417 10:22:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.417 10:22:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.417 10:22:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.417 10:22:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.417 10:22:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.417 10:22:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.417 10:22:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.417 10:22:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.417 10:22:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:17:07.417 10:22:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:17:07.417 10:22:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.417 10:22:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.417 10:22:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:07.417 10:22:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:07.417 10:22:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.417 10:22:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.417 10:22:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.417 10:22:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.417 10:22:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.417 10:22:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.417 10:22:20 -- 
paths/export.sh@5 -- # export PATH 00:17:07.417 10:22:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.417 10:22:20 -- nvmf/common.sh@46 -- # : 0 00:17:07.417 10:22:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:07.417 10:22:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:07.417 10:22:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:07.417 10:22:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.417 10:22:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.417 10:22:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:07.417 10:22:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:07.417 10:22:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:07.417 10:22:20 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:17:07.417 10:22:20 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:17:07.417 10:22:20 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:17:07.417 10:22:20 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:07.417 10:22:20 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:17:07.417 10:22:20 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:17:07.417 10:22:20 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:17:07.417 10:22:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:07.417 10:22:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.417 10:22:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:07.417 10:22:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:07.417 10:22:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:07.417 10:22:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.417 10:22:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.417 10:22:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.417 10:22:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:07.417 10:22:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:07.417 10:22:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:07.417 10:22:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:07.417 10:22:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:07.417 10:22:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:07.417 10:22:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:07.417 10:22:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:07.417 10:22:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:07.417 10:22:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:07.417 10:22:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:07.417 10:22:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:07.417 10:22:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:07.417 10:22:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:17:07.417 10:22:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:07.417 10:22:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:07.417 10:22:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:07.417 10:22:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:07.417 10:22:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:07.417 10:22:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:07.417 Cannot find device "nvmf_tgt_br" 00:17:07.417 10:22:20 -- nvmf/common.sh@154 -- # true 00:17:07.417 10:22:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:07.417 Cannot find device "nvmf_tgt_br2" 00:17:07.417 10:22:20 -- nvmf/common.sh@155 -- # true 00:17:07.417 10:22:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:07.417 10:22:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:07.417 Cannot find device "nvmf_tgt_br" 00:17:07.417 10:22:20 -- nvmf/common.sh@157 -- # true 00:17:07.417 10:22:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:07.417 Cannot find device "nvmf_tgt_br2" 00:17:07.417 10:22:20 -- nvmf/common.sh@158 -- # true 00:17:07.417 10:22:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:07.417 10:22:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:07.417 10:22:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:07.417 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:07.417 10:22:20 -- nvmf/common.sh@161 -- # true 00:17:07.417 10:22:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:07.417 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:07.417 10:22:20 -- nvmf/common.sh@162 -- # true 00:17:07.417 10:22:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:07.417 10:22:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:07.417 10:22:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:07.417 10:22:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:07.417 10:22:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:07.417 10:22:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:07.417 10:22:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:07.417 10:22:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:07.417 10:22:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:07.417 10:22:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:07.677 10:22:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:07.677 10:22:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:07.677 10:22:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:07.677 10:22:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:07.677 10:22:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:07.677 10:22:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:07.677 10:22:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:07.677 10:22:20 -- nvmf/common.sh@192 -- # ip 
link set nvmf_br up 00:17:07.677 10:22:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:07.677 10:22:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:07.677 10:22:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:07.677 10:22:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:07.677 10:22:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:07.677 10:22:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:07.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:07.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:17:07.677 00:17:07.677 --- 10.0.0.2 ping statistics --- 00:17:07.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.677 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:17:07.677 10:22:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:07.677 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:07.677 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:17:07.677 00:17:07.677 --- 10.0.0.3 ping statistics --- 00:17:07.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.677 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:07.677 10:22:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:07.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:07.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:17:07.677 00:17:07.677 --- 10.0.0.1 ping statistics --- 00:17:07.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.677 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:17:07.677 10:22:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:07.677 10:22:20 -- nvmf/common.sh@421 -- # return 0 00:17:07.677 10:22:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:07.677 10:22:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:07.677 10:22:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:07.677 10:22:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:07.677 10:22:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:07.677 10:22:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:07.677 10:22:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:07.677 10:22:20 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:17:07.677 10:22:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:07.677 10:22:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:07.677 10:22:20 -- common/autotest_common.sh@10 -- # set +x 00:17:07.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.677 10:22:21 -- nvmf/common.sh@469 -- # nvmfpid=82715 00:17:07.677 10:22:21 -- nvmf/common.sh@470 -- # waitforlisten 82715 00:17:07.677 10:22:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:07.677 10:22:21 -- common/autotest_common.sh@819 -- # '[' -z 82715 ']' 00:17:07.677 10:22:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.677 10:22:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:07.677 10:22:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
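The nvmf_veth_init sequence traced above (nvmf/common.sh@140-206) is the entire network fixture for these TCP tests: the initiator stays in the default namespace at 10.0.0.1, the target interfaces sit inside nvmf_tgt_ns_spdk at 10.0.0.2 and 10.0.0.3, and a bridge plus two iptables rules stitch the two sides together before the pings confirm reachability. A standalone sketch of the same topology, using only names and addresses that appear in the trace (an equivalent setup for reference, not the helper's literal code):

    # target namespace plus three veth pairs; the *_if ends carry traffic,
    # the *_br ends get enslaved to a bridge in the default namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # address plan: 10.0.0.1 initiator, 10.0.0.2 and 10.0.0.3 target
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up and bridge the host-side peers together
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # admit NVMe/TCP traffic, allow bridge forwarding, then sanity-check
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The target is then launched inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2, pid 82715 above), so every host connection has to cross the veth/bridge path that this particular test later removes on purpose.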
00:17:07.677 10:22:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:07.677 10:22:21 -- common/autotest_common.sh@10 -- # set +x 00:17:07.677 [2024-07-26 10:22:21.047917] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:07.677 [2024-07-26 10:22:21.048004] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.936 [2024-07-26 10:22:21.185153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.936 [2024-07-26 10:22:21.276075] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:07.936 [2024-07-26 10:22:21.276233] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.936 [2024-07-26 10:22:21.276247] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.936 [2024-07-26 10:22:21.276256] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.936 [2024-07-26 10:22:21.276282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.885 10:22:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:08.886 10:22:21 -- common/autotest_common.sh@852 -- # return 0 00:17:08.886 10:22:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:08.886 10:22:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:08.886 10:22:21 -- common/autotest_common.sh@10 -- # set +x 00:17:08.886 10:22:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.886 10:22:22 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:17:08.886 10:22:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:08.886 10:22:22 -- common/autotest_common.sh@10 -- # set +x 00:17:08.886 [2024-07-26 10:22:22.046249] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.886 [2024-07-26 10:22:22.054368] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:08.886 null0 00:17:08.886 [2024-07-26 10:22:22.086326] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:08.886 10:22:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:08.886 10:22:22 -- host/discovery_remove_ifc.sh@59 -- # hostpid=82747 00:17:08.886 10:22:22 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:17:08.886 10:22:22 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 82747 /tmp/host.sock 00:17:08.886 10:22:22 -- common/autotest_common.sh@819 -- # '[' -z 82747 ']' 00:17:08.886 10:22:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:17:08.886 10:22:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:08.886 10:22:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:08.886 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:08.886 10:22:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:08.886 10:22:22 -- common/autotest_common.sh@10 -- # set +x 00:17:08.886 [2024-07-26 10:22:22.158565] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
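The target-side configuration itself is hidden behind the collapsed rpc_cmd at discovery_remove_ifc.sh@43; only its effects are visible in the trace (TCP transport init, a discovery listener on 10.0.0.2:8009, a null0 bdev, and a data listener on 10.0.0.2:4420 limited to nqn.2021-12.io.spdk:test). A plausible equivalent written as explicit rpc.py calls is sketched below; the bdev size, block size and serial number are illustrative assumptions, not the script's literal contents:

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'

    # transport options follow NVMF_TRANSPORT_OPTS, which the trace sets to '-t tcp -o'
    $RPC nvmf_create_transport -t tcp -o

    # backing bdev; the name null0 shows up in the rpc_cmd output above (size/bs assumed)
    $RPC bdev_null_create null0 100 4096

    # data subsystem on 10.0.0.2:4420, visible only to the test host NQN
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # discovery service listener on 8009, the endpoint the host-side poller attaches to
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009

The host side of the test is a second nvmf_tgt instance (pid 82747, started with -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme as shown at discovery_remove_ifc.sh@58) that acts purely as an NVMe/TCP initiator driven over /tmp/host.sock.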
00:17:08.886 [2024-07-26 10:22:22.158940] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82747 ] 00:17:08.886 [2024-07-26 10:22:22.298803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.144 [2024-07-26 10:22:22.389462] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:09.144 [2024-07-26 10:22:22.389910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.711 10:22:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:09.711 10:22:23 -- common/autotest_common.sh@852 -- # return 0 00:17:09.711 10:22:23 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:09.711 10:22:23 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:17:09.711 10:22:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:09.711 10:22:23 -- common/autotest_common.sh@10 -- # set +x 00:17:09.969 10:22:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:09.969 10:22:23 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:17:09.969 10:22:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:09.969 10:22:23 -- common/autotest_common.sh@10 -- # set +x 00:17:09.969 10:22:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:09.969 10:22:23 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:17:09.969 10:22:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:09.969 10:22:23 -- common/autotest_common.sh@10 -- # set +x 00:17:10.904 [2024-07-26 10:22:24.279557] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:10.904 [2024-07-26 10:22:24.279606] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:10.904 [2024-07-26 10:22:24.279623] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:10.904 [2024-07-26 10:22:24.285598] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:10.904 [2024-07-26 10:22:24.341830] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:10.904 [2024-07-26 10:22:24.342038] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:10.904 [2024-07-26 10:22:24.342107] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:10.904 [2024-07-26 10:22:24.342239] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:10.904 [2024-07-26 10:22:24.342321] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:10.904 10:22:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:10.904 10:22:24 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:17:10.904 10:22:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:10.904 [2024-07-26 
10:22:24.348177] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1087a40 was disconnected and freed. delete nvme_qpair. 00:17:10.904 10:22:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:10.904 10:22:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:10.904 10:22:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:10.904 10:22:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:10.904 10:22:24 -- common/autotest_common.sh@10 -- # set +x 00:17:10.904 10:22:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:11.163 10:22:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:11.163 10:22:24 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:17:11.163 10:22:24 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:17:11.163 10:22:24 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:17:11.163 10:22:24 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:17:11.163 10:22:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:11.163 10:22:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:11.163 10:22:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:11.163 10:22:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:11.163 10:22:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:11.163 10:22:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:11.163 10:22:24 -- common/autotest_common.sh@10 -- # set +x 00:17:11.163 10:22:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:11.163 10:22:24 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:11.163 10:22:24 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:12.097 10:22:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:12.097 10:22:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:12.097 10:22:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:12.097 10:22:25 -- common/autotest_common.sh@10 -- # set +x 00:17:12.097 10:22:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:12.097 10:22:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:12.097 10:22:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:12.097 10:22:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:12.097 10:22:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:12.097 10:22:25 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:13.473 10:22:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:13.473 10:22:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:13.473 10:22:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:13.473 10:22:26 -- common/autotest_common.sh@10 -- # set +x 00:17:13.473 10:22:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:13.473 10:22:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:13.473 10:22:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:13.473 10:22:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:13.473 10:22:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:13.473 10:22:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:14.411 10:22:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:14.411 10:22:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:14.411 10:22:27 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:14.411 10:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:14.411 10:22:27 -- common/autotest_common.sh@10 -- # set +x 00:17:14.411 10:22:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:14.411 10:22:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:14.411 10:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:14.411 10:22:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:14.411 10:22:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:15.347 10:22:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:15.347 10:22:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:15.347 10:22:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:15.347 10:22:28 -- common/autotest_common.sh@10 -- # set +x 00:17:15.347 10:22:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:15.347 10:22:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:15.347 10:22:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:15.347 10:22:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:15.347 10:22:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:15.347 10:22:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:16.283 10:22:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:16.283 10:22:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:16.283 10:22:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:16.283 10:22:29 -- common/autotest_common.sh@10 -- # set +x 00:17:16.283 10:22:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:16.283 10:22:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:16.283 10:22:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:16.542 10:22:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:16.542 [2024-07-26 10:22:29.769660] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:16.542 [2024-07-26 10:22:29.769911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:16.542 [2024-07-26 10:22:29.770059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:16.542 [2024-07-26 10:22:29.770341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:16.542 [2024-07-26 10:22:29.770400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:16.542 [2024-07-26 10:22:29.770516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:16.542 [2024-07-26 10:22:29.770530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:16.542 [2024-07-26 10:22:29.770540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:16.542 [2024-07-26 10:22:29.770550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:16.542 [2024-07-26 
10:22:29.770561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:16.542 [2024-07-26 10:22:29.770571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:16.542 [2024-07-26 10:22:29.770599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1035fc0 is same with the state(5) to be set 00:17:16.542 [2024-07-26 10:22:29.779657] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1035fc0 (9): Bad file descriptor 00:17:16.542 10:22:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:16.542 10:22:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:16.542 [2024-07-26 10:22:29.789685] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:17.477 10:22:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:17.477 10:22:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:17.477 10:22:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:17.477 10:22:30 -- common/autotest_common.sh@10 -- # set +x 00:17:17.477 10:22:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:17.477 10:22:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:17.477 10:22:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:17.477 [2024-07-26 10:22:30.820721] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:18.413 [2024-07-26 10:22:31.844718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:17:19.789 [2024-07-26 10:22:32.868704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:17:19.789 [2024-07-26 10:22:32.868834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1035fc0 with addr=10.0.0.2, port=4420 00:17:19.789 [2024-07-26 10:22:32.868864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1035fc0 is same with the state(5) to be set 00:17:19.789 [2024-07-26 10:22:32.868914] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:19.789 [2024-07-26 10:22:32.868933] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:19.789 [2024-07-26 10:22:32.868961] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:19.790 [2024-07-26 10:22:32.868982] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:17:19.790 [2024-07-26 10:22:32.869756] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1035fc0 (9): Bad file descriptor 00:17:19.790 [2024-07-26 10:22:32.869814] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:19.790 [2024-07-26 10:22:32.869861] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:17:19.790 [2024-07-26 10:22:32.869924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.790 [2024-07-26 10:22:32.869953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.790 [2024-07-26 10:22:32.869980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.790 [2024-07-26 10:22:32.870000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.790 [2024-07-26 10:22:32.870020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.790 [2024-07-26 10:22:32.870046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.790 [2024-07-26 10:22:32.870067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.790 [2024-07-26 10:22:32.870086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.790 [2024-07-26 10:22:32.870107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.790 [2024-07-26 10:22:32.870126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.790 [2024-07-26 10:22:32.870144] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:17:19.790 [2024-07-26 10:22:32.870201] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1035c60 (9): Bad file descriptor 00:17:19.790 [2024-07-26 10:22:32.871199] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:19.790 [2024-07-26 10:22:32.871229] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:17:19.790 10:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:19.790 10:22:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:19.790 10:22:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:20.724 10:22:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:20.724 10:22:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:20.724 10:22:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:20.724 10:22:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:20.724 10:22:33 -- common/autotest_common.sh@10 -- # set +x 00:17:20.724 10:22:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:20.724 10:22:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:20.724 10:22:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:20.724 10:22:33 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:20.724 10:22:33 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:20.724 10:22:33 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:20.724 10:22:33 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:20.724 10:22:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:20.724 10:22:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:20.724 10:22:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:20.724 10:22:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:20.724 10:22:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:20.724 10:22:33 -- common/autotest_common.sh@10 -- # set +x 00:17:20.724 10:22:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:20.724 10:22:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:20.724 10:22:34 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:20.724 10:22:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:21.663 [2024-07-26 10:22:34.875119] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:21.663 [2024-07-26 10:22:34.875159] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:21.663 [2024-07-26 10:22:34.875178] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:21.663 [2024-07-26 10:22:34.881156] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:17:21.663 [2024-07-26 10:22:34.936495] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:21.663 [2024-07-26 10:22:34.936549] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:21.663 [2024-07-26 10:22:34.936573] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:21.663 [2024-07-26 10:22:34.936589] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:17:21.663 [2024-07-26 10:22:34.936599] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:21.663 [2024-07-26 10:22:34.943798] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1036ed0 was disconnected and freed. delete nvme_qpair. 00:17:21.663 10:22:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:21.663 10:22:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:21.663 10:22:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:21.663 10:22:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:21.663 10:22:35 -- common/autotest_common.sh@10 -- # set +x 00:17:21.663 10:22:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:21.663 10:22:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:21.663 10:22:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:21.663 10:22:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:21.663 10:22:35 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:21.663 10:22:35 -- host/discovery_remove_ifc.sh@90 -- # killprocess 82747 00:17:21.663 10:22:35 -- common/autotest_common.sh@926 -- # '[' -z 82747 ']' 00:17:21.663 10:22:35 -- common/autotest_common.sh@930 -- # kill -0 82747 00:17:21.663 10:22:35 -- common/autotest_common.sh@931 -- # uname 00:17:21.663 10:22:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:21.663 10:22:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82747 00:17:21.922 killing process with pid 82747 00:17:21.922 10:22:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:21.922 10:22:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:21.922 10:22:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82747' 00:17:21.922 10:22:35 -- common/autotest_common.sh@945 -- # kill 82747 00:17:21.922 10:22:35 -- common/autotest_common.sh@950 -- # wait 82747 00:17:21.922 10:22:35 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:21.922 10:22:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:21.922 10:22:35 -- nvmf/common.sh@116 -- # sync 00:17:22.181 10:22:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:22.181 10:22:35 -- nvmf/common.sh@119 -- # set +e 00:17:22.181 10:22:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:22.181 10:22:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:22.181 rmmod nvme_tcp 00:17:22.181 rmmod nvme_fabrics 00:17:22.181 rmmod nvme_keyring 00:17:22.181 10:22:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:22.181 10:22:35 -- nvmf/common.sh@123 -- # set -e 00:17:22.181 10:22:35 -- nvmf/common.sh@124 -- # return 0 00:17:22.181 10:22:35 -- nvmf/common.sh@477 -- # '[' -n 82715 ']' 00:17:22.181 10:22:35 -- nvmf/common.sh@478 -- # killprocess 82715 00:17:22.181 10:22:35 -- common/autotest_common.sh@926 -- # '[' -z 82715 ']' 00:17:22.181 10:22:35 -- common/autotest_common.sh@930 -- # kill -0 82715 00:17:22.181 10:22:35 -- common/autotest_common.sh@931 -- # uname 00:17:22.181 10:22:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:22.181 10:22:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82715 00:17:22.181 killing process with pid 82715 00:17:22.181 10:22:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:22.181 10:22:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
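Pieced together from the xtrace fragments repeated above (discovery_remove_ifc.sh@29-34 plus the @75-86 steps), the wait machinery and the interface flap it observes reduce to a few lines. This is a reconstruction from the trace, so details such as retry limits may differ from the real helpers:

    get_bdev_list() {
        # bdev names as seen by the host-side app on /tmp/host.sock
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # poll once per second until the bdev list matches the expectation
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme0n1   # the initial discovery attach produced nvme0n1

    # pull the target address out from under the live controller ...
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    # ... and wait for the ctrlr-loss timeout to empty the list
    wait_for_bdev ''

    # restore the interface; the discovery poller re-attaches as nvme1
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1

The middle phase is driven by the flags passed to bdev_nvme_start_discovery earlier (--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1): each reconnect attempt fails with errno 110 until the loss timeout expires, the controller and its nvme0n1 bdev are then deleted, and get_bdev_list finally returns an empty string.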
00:17:22.181 10:22:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82715' 00:17:22.181 10:22:35 -- common/autotest_common.sh@945 -- # kill 82715 00:17:22.181 10:22:35 -- common/autotest_common.sh@950 -- # wait 82715 00:17:22.439 10:22:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:22.439 10:22:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:22.439 10:22:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:22.439 10:22:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:22.439 10:22:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:22.439 10:22:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.439 10:22:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.439 10:22:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.439 10:22:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:22.439 ************************************ 00:17:22.439 END TEST nvmf_discovery_remove_ifc 00:17:22.439 ************************************ 00:17:22.439 00:17:22.439 real 0m15.165s 00:17:22.439 user 0m24.415s 00:17:22.439 sys 0m2.501s 00:17:22.439 10:22:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:22.439 10:22:35 -- common/autotest_common.sh@10 -- # set +x 00:17:22.439 10:22:35 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:17:22.439 10:22:35 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:22.439 10:22:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:22.439 10:22:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:22.439 10:22:35 -- common/autotest_common.sh@10 -- # set +x 00:17:22.439 ************************************ 00:17:22.439 START TEST nvmf_digest 00:17:22.439 ************************************ 00:17:22.439 10:22:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:22.439 * Looking for test storage... 
00:17:22.439 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:22.439 10:22:35 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:22.439 10:22:35 -- nvmf/common.sh@7 -- # uname -s 00:17:22.439 10:22:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.439 10:22:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.439 10:22:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.439 10:22:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.439 10:22:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.439 10:22:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.439 10:22:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.439 10:22:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.439 10:22:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.439 10:22:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.439 10:22:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:17:22.439 10:22:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:17:22.439 10:22:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.439 10:22:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.439 10:22:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:22.439 10:22:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:22.439 10:22:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.439 10:22:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.439 10:22:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.439 10:22:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.439 10:22:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.440 10:22:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.440 10:22:35 -- paths/export.sh@5 
-- # export PATH 00:17:22.440 10:22:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.440 10:22:35 -- nvmf/common.sh@46 -- # : 0 00:17:22.440 10:22:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:22.440 10:22:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:22.440 10:22:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:22.440 10:22:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.440 10:22:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.440 10:22:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:22.440 10:22:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:22.440 10:22:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:22.440 10:22:35 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:22.440 10:22:35 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:22.440 10:22:35 -- host/digest.sh@16 -- # runtime=2 00:17:22.440 10:22:35 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:17:22.440 10:22:35 -- host/digest.sh@132 -- # nvmftestinit 00:17:22.440 10:22:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:22.440 10:22:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.440 10:22:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:22.440 10:22:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:22.440 10:22:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:22.440 10:22:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.440 10:22:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.440 10:22:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.440 10:22:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:22.440 10:22:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:22.440 10:22:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:22.440 10:22:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:22.440 10:22:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:22.440 10:22:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:22.440 10:22:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:22.440 10:22:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:22.440 10:22:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:22.440 10:22:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:22.440 10:22:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:22.440 10:22:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:22.440 10:22:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:22.440 10:22:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:22.440 10:22:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:22.440 10:22:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:22.440 10:22:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:22.440 10:22:35 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:22.440 10:22:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:22.440 10:22:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:22.700 Cannot find device "nvmf_tgt_br" 00:17:22.700 10:22:35 -- nvmf/common.sh@154 -- # true 00:17:22.700 10:22:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:22.700 Cannot find device "nvmf_tgt_br2" 00:17:22.700 10:22:35 -- nvmf/common.sh@155 -- # true 00:17:22.700 10:22:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:22.700 10:22:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:22.700 Cannot find device "nvmf_tgt_br" 00:17:22.700 10:22:35 -- nvmf/common.sh@157 -- # true 00:17:22.700 10:22:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:22.700 Cannot find device "nvmf_tgt_br2" 00:17:22.700 10:22:35 -- nvmf/common.sh@158 -- # true 00:17:22.700 10:22:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:22.700 10:22:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:22.700 10:22:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:22.700 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:22.700 10:22:36 -- nvmf/common.sh@161 -- # true 00:17:22.700 10:22:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:22.700 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:22.700 10:22:36 -- nvmf/common.sh@162 -- # true 00:17:22.700 10:22:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:22.700 10:22:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:22.700 10:22:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:22.701 10:22:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:22.701 10:22:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:22.701 10:22:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:22.701 10:22:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:22.701 10:22:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:22.701 10:22:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:22.701 10:22:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:22.701 10:22:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:22.701 10:22:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:22.701 10:22:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:22.701 10:22:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:22.701 10:22:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:22.701 10:22:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:22.701 10:22:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:22.701 10:22:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:22.701 10:22:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:22.701 10:22:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:22.701 10:22:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:22.959 
10:22:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:22.959 10:22:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:22.959 10:22:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:22.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:22.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:17:22.959 00:17:22.959 --- 10.0.0.2 ping statistics --- 00:17:22.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.959 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:17:22.959 10:22:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:22.959 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:22.959 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:17:22.959 00:17:22.959 --- 10.0.0.3 ping statistics --- 00:17:22.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.959 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:22.959 10:22:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:22.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:22.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:17:22.959 00:17:22.959 --- 10.0.0.1 ping statistics --- 00:17:22.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.959 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:22.959 10:22:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:22.959 10:22:36 -- nvmf/common.sh@421 -- # return 0 00:17:22.959 10:22:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:22.959 10:22:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:22.959 10:22:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:22.959 10:22:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:22.959 10:22:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:22.959 10:22:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:22.959 10:22:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:22.959 10:22:36 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:22.959 10:22:36 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:17:22.959 10:22:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:22.959 10:22:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:22.959 10:22:36 -- common/autotest_common.sh@10 -- # set +x 00:17:22.959 ************************************ 00:17:22.959 START TEST nvmf_digest_clean 00:17:22.959 ************************************ 00:17:22.959 10:22:36 -- common/autotest_common.sh@1104 -- # run_digest 00:17:22.959 10:22:36 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:17:22.959 10:22:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:22.959 10:22:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:22.959 10:22:36 -- common/autotest_common.sh@10 -- # set +x 00:17:22.959 10:22:36 -- nvmf/common.sh@469 -- # nvmfpid=83163 00:17:22.959 10:22:36 -- nvmf/common.sh@470 -- # waitforlisten 83163 00:17:22.959 10:22:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:22.959 10:22:36 -- common/autotest_common.sh@819 -- # '[' -z 83163 ']' 00:17:22.959 10:22:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:22.959 10:22:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:22.959 10:22:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.959 10:22:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:22.959 10:22:36 -- common/autotest_common.sh@10 -- # set +x 00:17:22.959 [2024-07-26 10:22:36.278934] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:22.959 [2024-07-26 10:22:36.279039] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.217 [2024-07-26 10:22:36.417026] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.217 [2024-07-26 10:22:36.512903] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:23.217 [2024-07-26 10:22:36.513074] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:23.217 [2024-07-26 10:22:36.513090] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:23.217 [2024-07-26 10:22:36.513101] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:23.217 [2024-07-26 10:22:36.513132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.151 10:22:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:24.151 10:22:37 -- common/autotest_common.sh@852 -- # return 0 00:17:24.151 10:22:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:24.151 10:22:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:24.151 10:22:37 -- common/autotest_common.sh@10 -- # set +x 00:17:24.151 10:22:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.151 10:22:37 -- host/digest.sh@120 -- # common_target_config 00:17:24.151 10:22:37 -- host/digest.sh@43 -- # rpc_cmd 00:17:24.151 10:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:24.151 10:22:37 -- common/autotest_common.sh@10 -- # set +x 00:17:24.151 null0 00:17:24.151 [2024-07-26 10:22:37.397296] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:24.151 [2024-07-26 10:22:37.421419] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.151 10:22:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:24.151 10:22:37 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:17:24.151 10:22:37 -- host/digest.sh@77 -- # local rw bs qd 00:17:24.151 10:22:37 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:24.151 10:22:37 -- host/digest.sh@80 -- # rw=randread 00:17:24.151 10:22:37 -- host/digest.sh@80 -- # bs=4096 00:17:24.151 10:22:37 -- host/digest.sh@80 -- # qd=128 00:17:24.151 10:22:37 -- host/digest.sh@82 -- # bperfpid=83195 00:17:24.151 10:22:37 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:24.151 10:22:37 -- host/digest.sh@83 -- # waitforlisten 83195 /var/tmp/bperf.sock 00:17:24.151 10:22:37 -- common/autotest_common.sh@819 -- # '[' -z 83195 ']' 00:17:24.151 10:22:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:24.151 10:22:37 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:17:24.151 10:22:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:24.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:24.151 10:22:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:24.151 10:22:37 -- common/autotest_common.sh@10 -- # set +x 00:17:24.151 [2024-07-26 10:22:37.475189] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:24.151 [2024-07-26 10:22:37.475532] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83195 ] 00:17:24.409 [2024-07-26 10:22:37.615953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.409 [2024-07-26 10:22:37.712481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.357 10:22:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:25.357 10:22:38 -- common/autotest_common.sh@852 -- # return 0 00:17:25.357 10:22:38 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:25.357 10:22:38 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:25.357 10:22:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:25.649 10:22:38 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:25.649 10:22:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:25.907 nvme0n1 00:17:25.907 10:22:39 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:25.907 10:22:39 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:25.907 Running I/O for 2 seconds... 
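Each run_bperf iteration traced here follows the same five steps; gathered from the xtrace above into one place (paths abbreviated, otherwise the commands the trace shows), a single digest run amounts to:

    # 1. start bdevperf idle, exposing its own RPC socket
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -q 128 -t 2 -z --wait-for-rpc &

    # 2. complete framework init (bdevperf was started with --wait-for-rpc)
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init

    # 3. attach the target with data digest enabled; --ddgst is what makes
    #    every data PDU on this NVMe/TCP connection carry a crc32c
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 4. drive I/O against the resulting nvme0n1 bdev for the 2-second window
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

    # 5. afterwards the test reads the accel crc32c counters (next in the log)
    #    and then kills the bperf pid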
00:17:28.438 00:17:28.438 Latency(us) 00:17:28.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.438 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:28.438 nvme0n1 : 2.01 15221.62 59.46 0.00 0.00 8403.13 7804.74 22758.87 00:17:28.438 =================================================================================================================== 00:17:28.438 Total : 15221.62 59.46 0.00 0.00 8403.13 7804.74 22758.87 00:17:28.438 0 00:17:28.438 10:22:41 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:28.438 10:22:41 -- host/digest.sh@92 -- # get_accel_stats 00:17:28.438 10:22:41 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:28.438 10:22:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:28.438 10:22:41 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:28.438 | select(.opcode=="crc32c") 00:17:28.438 | "\(.module_name) \(.executed)"' 00:17:28.438 10:22:41 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:28.438 10:22:41 -- host/digest.sh@93 -- # exp_module=software 00:17:28.438 10:22:41 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:28.438 10:22:41 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:28.438 10:22:41 -- host/digest.sh@97 -- # killprocess 83195 00:17:28.438 10:22:41 -- common/autotest_common.sh@926 -- # '[' -z 83195 ']' 00:17:28.438 10:22:41 -- common/autotest_common.sh@930 -- # kill -0 83195 00:17:28.438 10:22:41 -- common/autotest_common.sh@931 -- # uname 00:17:28.438 10:22:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:28.438 10:22:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83195 00:17:28.438 killing process with pid 83195 00:17:28.438 Received shutdown signal, test time was about 2.000000 seconds 00:17:28.438 00:17:28.438 Latency(us) 00:17:28.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.438 =================================================================================================================== 00:17:28.438 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:28.438 10:22:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:28.438 10:22:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:28.438 10:22:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83195' 00:17:28.438 10:22:41 -- common/autotest_common.sh@945 -- # kill 83195 00:17:28.438 10:22:41 -- common/autotest_common.sh@950 -- # wait 83195 00:17:28.438 10:22:41 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:17:28.438 10:22:41 -- host/digest.sh@77 -- # local rw bs qd 00:17:28.438 10:22:41 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:28.438 10:22:41 -- host/digest.sh@80 -- # rw=randread 00:17:28.438 10:22:41 -- host/digest.sh@80 -- # bs=131072 00:17:28.438 10:22:41 -- host/digest.sh@80 -- # qd=16 00:17:28.438 10:22:41 -- host/digest.sh@82 -- # bperfpid=83255 00:17:28.438 10:22:41 -- host/digest.sh@83 -- # waitforlisten 83255 /var/tmp/bperf.sock 00:17:28.438 10:22:41 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:28.438 10:22:41 -- common/autotest_common.sh@819 -- # '[' -z 83255 ']' 00:17:28.438 10:22:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:28.438 10:22:41 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:17:28.438 10:22:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:28.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:28.438 10:22:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:28.438 10:22:41 -- common/autotest_common.sh@10 -- # set +x 00:17:28.438 [2024-07-26 10:22:41.838829] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:28.438 [2024-07-26 10:22:41.839187] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83255 ] 00:17:28.438 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:28.438 Zero copy mechanism will not be used. 00:17:28.697 [2024-07-26 10:22:41.976864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.697 [2024-07-26 10:22:42.067305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.630 10:22:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:29.630 10:22:42 -- common/autotest_common.sh@852 -- # return 0 00:17:29.630 10:22:42 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:29.630 10:22:42 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:29.630 10:22:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:29.889 10:22:43 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:29.889 10:22:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:30.147 nvme0n1 00:17:30.147 10:22:43 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:30.147 10:22:43 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:30.147 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:30.147 Zero copy mechanism will not be used. 00:17:30.147 Running I/O for 2 seconds... 
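Each run ends with the same crc32c verification step, visible after the first results table above and again after the one below: accel statistics are read back from the bdevperf app and the script checks which accel module executed the digest work. Sketched with the jq filter exactly as it appears in the trace:

  # print "<module_name> <executed>" for the crc32c opcode
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

No hardware accel module is configured in this job, so the expected module is software (exp_module=software in the trace) and the check passes as long as the executed count is non-zero.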
00:17:32.117 00:17:32.117 Latency(us) 00:17:32.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.117 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:32.117 nvme0n1 : 2.00 7670.22 958.78 0.00 0.00 2082.69 1861.82 5898.24 00:17:32.117 =================================================================================================================== 00:17:32.117 Total : 7670.22 958.78 0.00 0.00 2082.69 1861.82 5898.24 00:17:32.117 0 00:17:32.117 10:22:45 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:32.117 10:22:45 -- host/digest.sh@92 -- # get_accel_stats 00:17:32.117 10:22:45 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:32.117 10:22:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:32.117 10:22:45 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:32.117 | select(.opcode=="crc32c") 00:17:32.117 | "\(.module_name) \(.executed)"' 00:17:32.375 10:22:45 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:32.375 10:22:45 -- host/digest.sh@93 -- # exp_module=software 00:17:32.375 10:22:45 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:32.375 10:22:45 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:32.375 10:22:45 -- host/digest.sh@97 -- # killprocess 83255 00:17:32.375 10:22:45 -- common/autotest_common.sh@926 -- # '[' -z 83255 ']' 00:17:32.375 10:22:45 -- common/autotest_common.sh@930 -- # kill -0 83255 00:17:32.375 10:22:45 -- common/autotest_common.sh@931 -- # uname 00:17:32.375 10:22:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:32.375 10:22:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83255 00:17:32.375 killing process with pid 83255 00:17:32.375 Received shutdown signal, test time was about 2.000000 seconds 00:17:32.375 00:17:32.375 Latency(us) 00:17:32.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.375 =================================================================================================================== 00:17:32.375 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:32.375 10:22:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:32.375 10:22:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:32.375 10:22:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83255' 00:17:32.375 10:22:45 -- common/autotest_common.sh@945 -- # kill 83255 00:17:32.375 10:22:45 -- common/autotest_common.sh@950 -- # wait 83255 00:17:32.634 10:22:46 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:17:32.634 10:22:46 -- host/digest.sh@77 -- # local rw bs qd 00:17:32.634 10:22:46 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:32.634 10:22:46 -- host/digest.sh@80 -- # rw=randwrite 00:17:32.634 10:22:46 -- host/digest.sh@80 -- # bs=4096 00:17:32.634 10:22:46 -- host/digest.sh@80 -- # qd=128 00:17:32.634 10:22:46 -- host/digest.sh@82 -- # bperfpid=83310 00:17:32.634 10:22:46 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:32.634 10:22:46 -- host/digest.sh@83 -- # waitforlisten 83310 /var/tmp/bperf.sock 00:17:32.634 10:22:46 -- common/autotest_common.sh@819 -- # '[' -z 83310 ']' 00:17:32.634 10:22:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:32.634 10:22:46 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:17:32.634 10:22:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:32.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:32.634 10:22:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:32.634 10:22:46 -- common/autotest_common.sh@10 -- # set +x 00:17:32.634 [2024-07-26 10:22:46.062642] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:32.634 [2024-07-26 10:22:46.062929] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83310 ] 00:17:32.892 [2024-07-26 10:22:46.200666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.892 [2024-07-26 10:22:46.285669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.826 10:22:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:33.826 10:22:46 -- common/autotest_common.sh@852 -- # return 0 00:17:33.826 10:22:46 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:33.826 10:22:46 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:33.826 10:22:46 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:33.826 10:22:47 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:33.826 10:22:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:34.392 nvme0n1 00:17:34.392 10:22:47 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:34.392 10:22:47 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:34.392 Running I/O for 2 seconds... 
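A note on reading the bdevperf summary tables in this trace: the columns are per-job runtime in seconds, IOPS, throughput in MiB/s, failed and timed-out I/O per second (Fail/s, TO/s), and average/min/max latency in microseconds. As a quick consistency check on the 131072-byte randread run above: 7670.22 IOPS * 131072 bytes is roughly 1.005 GB/s, i.e. 7670.22 * 0.125 MiB = 958.78 MiB/s, matching the MiB/s column; the randwrite tables that follow read the same way.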
00:17:36.294 00:17:36.294 Latency(us) 00:17:36.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.294 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.294 nvme0n1 : 2.01 15709.67 61.37 0.00 0.00 8140.83 6642.97 15847.80 00:17:36.294 =================================================================================================================== 00:17:36.294 Total : 15709.67 61.37 0.00 0.00 8140.83 6642.97 15847.80 00:17:36.294 0 00:17:36.294 10:22:49 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:36.294 10:22:49 -- host/digest.sh@92 -- # get_accel_stats 00:17:36.294 10:22:49 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:36.294 10:22:49 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:36.294 | select(.opcode=="crc32c") 00:17:36.294 | "\(.module_name) \(.executed)"' 00:17:36.294 10:22:49 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:36.553 10:22:49 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:36.553 10:22:49 -- host/digest.sh@93 -- # exp_module=software 00:17:36.553 10:22:49 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:36.553 10:22:49 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:36.553 10:22:49 -- host/digest.sh@97 -- # killprocess 83310 00:17:36.553 10:22:49 -- common/autotest_common.sh@926 -- # '[' -z 83310 ']' 00:17:36.553 10:22:49 -- common/autotest_common.sh@930 -- # kill -0 83310 00:17:36.553 10:22:49 -- common/autotest_common.sh@931 -- # uname 00:17:36.553 10:22:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:36.553 10:22:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83310 00:17:36.553 killing process with pid 83310 00:17:36.553 Received shutdown signal, test time was about 2.000000 seconds 00:17:36.553 00:17:36.553 Latency(us) 00:17:36.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.553 =================================================================================================================== 00:17:36.553 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:36.553 10:22:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:36.553 10:22:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:36.553 10:22:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83310' 00:17:36.553 10:22:49 -- common/autotest_common.sh@945 -- # kill 83310 00:17:36.553 10:22:49 -- common/autotest_common.sh@950 -- # wait 83310 00:17:36.812 10:22:50 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:17:36.812 10:22:50 -- host/digest.sh@77 -- # local rw bs qd 00:17:36.812 10:22:50 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:36.812 10:22:50 -- host/digest.sh@80 -- # rw=randwrite 00:17:36.812 10:22:50 -- host/digest.sh@80 -- # bs=131072 00:17:36.812 10:22:50 -- host/digest.sh@80 -- # qd=16 00:17:36.812 10:22:50 -- host/digest.sh@82 -- # bperfpid=83370 00:17:36.812 10:22:50 -- host/digest.sh@83 -- # waitforlisten 83370 /var/tmp/bperf.sock 00:17:36.812 10:22:50 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:36.812 10:22:50 -- common/autotest_common.sh@819 -- # '[' -z 83370 ']' 00:17:36.812 10:22:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:36.812 10:22:50 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:17:36.812 10:22:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:36.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:36.812 10:22:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:36.812 10:22:50 -- common/autotest_common.sh@10 -- # set +x 00:17:36.812 [2024-07-26 10:22:50.231956] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:36.812 [2024-07-26 10:22:50.232250] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83370 ] 00:17:36.812 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:36.812 Zero copy mechanism will not be used. 00:17:37.071 [2024-07-26 10:22:50.373175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.071 [2024-07-26 10:22:50.460089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.007 10:22:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:38.007 10:22:51 -- common/autotest_common.sh@852 -- # return 0 00:17:38.007 10:22:51 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:38.007 10:22:51 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:38.007 10:22:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:38.266 10:22:51 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:38.266 10:22:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:38.525 nvme0n1 00:17:38.525 10:22:51 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:38.525 10:22:51 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:38.525 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:38.525 Zero copy mechanism will not be used. 00:17:38.525 Running I/O for 2 seconds... 
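Once its results and accel stats are collected, each bdevperf instance is torn down by the killprocess helper traced above (and again after the run below). A simplified sketch of what those xtrace lines correspond to:

  # killprocess "$pid", as traced from autotest_common.sh
  kill -0 "$pid"                     # confirm the process is still alive
  ps --no-headers -o comm= "$pid"    # its name: reactor_1 for bdevperf, reactor_0 for the nvmf target
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                        # reap it before the next iteration starts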
00:17:40.426 00:17:40.426 Latency(us) 00:17:40.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.426 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:40.426 nvme0n1 : 2.00 5830.31 728.79 0.00 0.00 2738.86 2100.13 6583.39 00:17:40.426 =================================================================================================================== 00:17:40.426 Total : 5830.31 728.79 0.00 0.00 2738.86 2100.13 6583.39 00:17:40.426 0 00:17:40.426 10:22:53 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:40.426 10:22:53 -- host/digest.sh@92 -- # get_accel_stats 00:17:40.426 10:22:53 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:40.426 10:22:53 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:40.426 | select(.opcode=="crc32c") 00:17:40.426 | "\(.module_name) \(.executed)"' 00:17:40.426 10:22:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:40.685 10:22:54 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:40.685 10:22:54 -- host/digest.sh@93 -- # exp_module=software 00:17:40.686 10:22:54 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:40.686 10:22:54 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:40.686 10:22:54 -- host/digest.sh@97 -- # killprocess 83370 00:17:40.686 10:22:54 -- common/autotest_common.sh@926 -- # '[' -z 83370 ']' 00:17:40.686 10:22:54 -- common/autotest_common.sh@930 -- # kill -0 83370 00:17:40.686 10:22:54 -- common/autotest_common.sh@931 -- # uname 00:17:40.686 10:22:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:40.686 10:22:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83370 00:17:40.686 killing process with pid 83370 00:17:40.686 Received shutdown signal, test time was about 2.000000 seconds 00:17:40.686 00:17:40.686 Latency(us) 00:17:40.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.686 =================================================================================================================== 00:17:40.686 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:40.686 10:22:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:40.686 10:22:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:40.686 10:22:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83370' 00:17:40.686 10:22:54 -- common/autotest_common.sh@945 -- # kill 83370 00:17:40.686 10:22:54 -- common/autotest_common.sh@950 -- # wait 83370 00:17:40.945 10:22:54 -- host/digest.sh@126 -- # killprocess 83163 00:17:40.945 10:22:54 -- common/autotest_common.sh@926 -- # '[' -z 83163 ']' 00:17:40.945 10:22:54 -- common/autotest_common.sh@930 -- # kill -0 83163 00:17:40.945 10:22:54 -- common/autotest_common.sh@931 -- # uname 00:17:40.945 10:22:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:40.945 10:22:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83163 00:17:40.945 killing process with pid 83163 00:17:40.945 10:22:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:40.945 10:22:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:40.945 10:22:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83163' 00:17:40.945 10:22:54 -- common/autotest_common.sh@945 -- # kill 83163 00:17:40.945 10:22:54 -- common/autotest_common.sh@950 -- # wait 83163 00:17:41.204 ************************************ 
00:17:41.204 END TEST nvmf_digest_clean 00:17:41.204 ************************************ 00:17:41.204 00:17:41.204 real 0m18.339s 00:17:41.204 user 0m35.159s 00:17:41.204 sys 0m4.796s 00:17:41.204 10:22:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:41.204 10:22:54 -- common/autotest_common.sh@10 -- # set +x 00:17:41.204 10:22:54 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:17:41.204 10:22:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:41.205 10:22:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:41.205 10:22:54 -- common/autotest_common.sh@10 -- # set +x 00:17:41.205 ************************************ 00:17:41.205 START TEST nvmf_digest_error 00:17:41.205 ************************************ 00:17:41.205 10:22:54 -- common/autotest_common.sh@1104 -- # run_digest_error 00:17:41.205 10:22:54 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:17:41.205 10:22:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:41.205 10:22:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:41.205 10:22:54 -- common/autotest_common.sh@10 -- # set +x 00:17:41.205 10:22:54 -- nvmf/common.sh@469 -- # nvmfpid=83459 00:17:41.205 10:22:54 -- nvmf/common.sh@470 -- # waitforlisten 83459 00:17:41.205 10:22:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:41.205 10:22:54 -- common/autotest_common.sh@819 -- # '[' -z 83459 ']' 00:17:41.205 10:22:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.205 10:22:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:41.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.205 10:22:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.205 10:22:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:41.205 10:22:54 -- common/autotest_common.sh@10 -- # set +x 00:17:41.464 [2024-07-26 10:22:54.673654] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:41.464 [2024-07-26 10:22:54.673964] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.464 [2024-07-26 10:22:54.813100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.464 [2024-07-26 10:22:54.886724] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:41.464 [2024-07-26 10:22:54.886866] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.464 [2024-07-26 10:22:54.886882] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.464 [2024-07-26 10:22:54.886891] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:41.464 [2024-07-26 10:22:54.886919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.401 10:22:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:42.401 10:22:55 -- common/autotest_common.sh@852 -- # return 0 00:17:42.401 10:22:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:42.401 10:22:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:42.401 10:22:55 -- common/autotest_common.sh@10 -- # set +x 00:17:42.401 10:22:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.401 10:22:55 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:42.401 10:22:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:42.401 10:22:55 -- common/autotest_common.sh@10 -- # set +x 00:17:42.401 [2024-07-26 10:22:55.631408] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:42.401 10:22:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:42.401 10:22:55 -- host/digest.sh@104 -- # common_target_config 00:17:42.401 10:22:55 -- host/digest.sh@43 -- # rpc_cmd 00:17:42.401 10:22:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:42.401 10:22:55 -- common/autotest_common.sh@10 -- # set +x 00:17:42.401 null0 00:17:42.401 [2024-07-26 10:22:55.737631] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.401 [2024-07-26 10:22:55.761759] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.401 10:22:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:42.401 10:22:55 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:17:42.401 10:22:55 -- host/digest.sh@54 -- # local rw bs qd 00:17:42.401 10:22:55 -- host/digest.sh@56 -- # rw=randread 00:17:42.401 10:22:55 -- host/digest.sh@56 -- # bs=4096 00:17:42.401 10:22:55 -- host/digest.sh@56 -- # qd=128 00:17:42.401 10:22:55 -- host/digest.sh@58 -- # bperfpid=83491 00:17:42.401 10:22:55 -- host/digest.sh@60 -- # waitforlisten 83491 /var/tmp/bperf.sock 00:17:42.401 10:22:55 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:42.401 10:22:55 -- common/autotest_common.sh@819 -- # '[' -z 83491 ']' 00:17:42.401 10:22:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:42.401 10:22:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:42.401 10:22:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:42.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:42.401 10:22:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:42.401 10:22:55 -- common/autotest_common.sh@10 -- # set +x 00:17:42.401 [2024-07-26 10:22:55.814404] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:17:42.401 [2024-07-26 10:22:55.814706] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83491 ] 00:17:42.660 [2024-07-26 10:22:55.952424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.660 [2024-07-26 10:22:56.032719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.597 10:22:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:43.597 10:22:56 -- common/autotest_common.sh@852 -- # return 0 00:17:43.597 10:22:56 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:43.597 10:22:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:43.597 10:22:57 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:43.597 10:22:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:43.597 10:22:57 -- common/autotest_common.sh@10 -- # set +x 00:17:43.856 10:22:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:43.856 10:22:57 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:43.856 10:22:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:44.115 nvme0n1 00:17:44.115 10:22:57 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:44.115 10:22:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.115 10:22:57 -- common/autotest_common.sh@10 -- # set +x 00:17:44.115 10:22:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.115 10:22:57 -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:44.115 10:22:57 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:44.115 Running I/O for 2 seconds... 
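Unlike the clean runs above, this run is meant to fail: the target's crc32c work has been routed through the error accel module and told to corrupt digests, so the host, which attached with --ddgst, detects bad data digests. The relevant RPCs from the trace, sketched as shell (target RPCs go to the default /var/tmp/spdk.sock, host RPCs to /var/tmp/bperf.sock):

  # target side: route crc32c through the "error" module, then have it corrupt results
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # -i 256 as passed by digest.sh
  # host side: keep per-error NVMe stats and remove the bdev retry limit
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

Each corrupted digest appears below as a 'data digest error' from nvme_tcp on the host together with a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.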
00:17:44.115 [2024-07-26 10:22:57.494717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.115 [2024-07-26 10:22:57.494766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.115 [2024-07-26 10:22:57.494798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.115 [2024-07-26 10:22:57.512928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.115 [2024-07-26 10:22:57.512968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.115 [2024-07-26 10:22:57.512999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.115 [2024-07-26 10:22:57.531804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.115 [2024-07-26 10:22:57.531855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.115 [2024-07-26 10:22:57.531870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.115 [2024-07-26 10:22:57.549548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.115 [2024-07-26 10:22:57.549618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.115 [2024-07-26 10:22:57.549650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.115 [2024-07-26 10:22:57.568129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.115 [2024-07-26 10:22:57.568170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.115 [2024-07-26 10:22:57.568185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.375 [2024-07-26 10:22:57.586160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.375 [2024-07-26 10:22:57.586205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.375 [2024-07-26 10:22:57.586220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.375 [2024-07-26 10:22:57.604111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.375 [2024-07-26 10:22:57.604154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.375 [2024-07-26 10:22:57.604168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.375 [2024-07-26 10:22:57.621624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.375 [2024-07-26 10:22:57.621689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.375 [2024-07-26 10:22:57.621720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.375 [2024-07-26 10:22:57.637705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.375 [2024-07-26 10:22:57.637743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.375 [2024-07-26 10:22:57.637772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.375 [2024-07-26 10:22:57.654156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.375 [2024-07-26 10:22:57.654198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.375 [2024-07-26 10:22:57.654228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.375 [2024-07-26 10:22:57.671551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.375 [2024-07-26 10:22:57.671619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.375 [2024-07-26 10:22:57.671651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.375 [2024-07-26 10:22:57.689849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.375 [2024-07-26 10:22:57.689905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.375 [2024-07-26 10:22:57.689935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.375 [2024-07-26 10:22:57.708560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.375 [2024-07-26 10:22:57.708634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.375 [2024-07-26 10:22:57.708652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.375 [2024-07-26 10:22:57.728979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.375 [2024-07-26 10:22:57.729042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.375 [2024-07-26 10:22:57.729065] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.375 [2024-07-26 10:22:57.747117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.375 [2024-07-26 10:22:57.747163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.375 [2024-07-26 10:22:57.747178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.375 [2024-07-26 10:22:57.764757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.375 [2024-07-26 10:22:57.764813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.375 [2024-07-26 10:22:57.764843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.375 [2024-07-26 10:22:57.782691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.375 [2024-07-26 10:22:57.782761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.375 [2024-07-26 10:22:57.782783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.375 [2024-07-26 10:22:57.801878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.375 [2024-07-26 10:22:57.801919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.375 [2024-07-26 10:22:57.801950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.375 [2024-07-26 10:22:57.819798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.375 [2024-07-26 10:22:57.819834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.375 [2024-07-26 10:22:57.819847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.635 [2024-07-26 10:22:57.837305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.635 [2024-07-26 10:22:57.837376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.635 [2024-07-26 10:22:57.837390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.635 [2024-07-26 10:22:57.854756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.635 [2024-07-26 10:22:57.854808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.635 [2024-07-26 10:22:57.854821] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.635 [2024-07-26 10:22:57.872543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.635 [2024-07-26 10:22:57.872614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.635 [2024-07-26 10:22:57.872627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.635 [2024-07-26 10:22:57.891273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.635 [2024-07-26 10:22:57.891328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.635 [2024-07-26 10:22:57.891372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.635 [2024-07-26 10:22:57.911655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.635 [2024-07-26 10:22:57.911746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.635 [2024-07-26 10:22:57.911766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.635 [2024-07-26 10:22:57.930476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.635 [2024-07-26 10:22:57.930530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.635 [2024-07-26 10:22:57.930543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.635 [2024-07-26 10:22:57.948569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.635 [2024-07-26 10:22:57.948639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.635 [2024-07-26 10:22:57.948652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.635 [2024-07-26 10:22:57.967062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.635 [2024-07-26 10:22:57.967101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.635 [2024-07-26 10:22:57.967115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.635 [2024-07-26 10:22:57.984840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.635 [2024-07-26 10:22:57.984903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:44.635 [2024-07-26 10:22:57.984916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.635 [2024-07-26 10:22:58.002832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.635 [2024-07-26 10:22:58.002884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.635 [2024-07-26 10:22:58.002896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.635 [2024-07-26 10:22:58.020600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.635 [2024-07-26 10:22:58.020661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.635 [2024-07-26 10:22:58.020673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.635 [2024-07-26 10:22:58.036631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.635 [2024-07-26 10:22:58.036688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.635 [2024-07-26 10:22:58.036700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.635 [2024-07-26 10:22:58.051631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.635 [2024-07-26 10:22:58.051714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.635 [2024-07-26 10:22:58.051727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.635 [2024-07-26 10:22:58.069227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.635 [2024-07-26 10:22:58.069282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.635 [2024-07-26 10:22:58.069296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.635 [2024-07-26 10:22:58.087243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.635 [2024-07-26 10:22:58.087283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.635 [2024-07-26 10:22:58.087297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.894 [2024-07-26 10:22:58.104871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.894 [2024-07-26 10:22:58.104926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:6792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.894 [2024-07-26 10:22:58.104940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.894 [2024-07-26 10:22:58.122998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.894 [2024-07-26 10:22:58.123060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.894 [2024-07-26 10:22:58.123075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.894 [2024-07-26 10:22:58.142279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.894 [2024-07-26 10:22:58.142335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.894 [2024-07-26 10:22:58.142349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.894 [2024-07-26 10:22:58.160268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.894 [2024-07-26 10:22:58.160306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.894 [2024-07-26 10:22:58.160319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.895 [2024-07-26 10:22:58.178141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.895 [2024-07-26 10:22:58.178181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.895 [2024-07-26 10:22:58.178195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.895 [2024-07-26 10:22:58.196784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.895 [2024-07-26 10:22:58.196838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.895 [2024-07-26 10:22:58.196851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.895 [2024-07-26 10:22:58.215943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.895 [2024-07-26 10:22:58.215982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.895 [2024-07-26 10:22:58.215995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.895 [2024-07-26 10:22:58.233667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.895 [2024-07-26 10:22:58.233729] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.895 [2024-07-26 10:22:58.233743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.895 [2024-07-26 10:22:58.250458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.895 [2024-07-26 10:22:58.250508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.895 [2024-07-26 10:22:58.250520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.895 [2024-07-26 10:22:58.267701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.895 [2024-07-26 10:22:58.267747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.895 [2024-07-26 10:22:58.267760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.895 [2024-07-26 10:22:58.285361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.895 [2024-07-26 10:22:58.285410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.895 [2024-07-26 10:22:58.285422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.895 [2024-07-26 10:22:58.303293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.895 [2024-07-26 10:22:58.303333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.895 [2024-07-26 10:22:58.303347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.895 [2024-07-26 10:22:58.322757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.895 [2024-07-26 10:22:58.322807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.895 [2024-07-26 10:22:58.322819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.895 [2024-07-26 10:22:58.341616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:44.895 [2024-07-26 10:22:58.341680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.895 [2024-07-26 10:22:58.341693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.154 [2024-07-26 10:22:58.360042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 
00:17:45.154 [2024-07-26 10:22:58.360080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.154 [2024-07-26 10:22:58.360094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.154 [2024-07-26 10:22:58.377725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.154 [2024-07-26 10:22:58.377776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.154 [2024-07-26 10:22:58.377789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.154 [2024-07-26 10:22:58.395771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.154 [2024-07-26 10:22:58.395808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.154 [2024-07-26 10:22:58.395821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.154 [2024-07-26 10:22:58.413894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.154 [2024-07-26 10:22:58.413950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.154 [2024-07-26 10:22:58.413964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.154 [2024-07-26 10:22:58.431707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.154 [2024-07-26 10:22:58.431744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.154 [2024-07-26 10:22:58.431758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.154 [2024-07-26 10:22:58.449786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.154 [2024-07-26 10:22:58.449819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.154 [2024-07-26 10:22:58.449832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.154 [2024-07-26 10:22:58.467491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.154 [2024-07-26 10:22:58.467540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.154 [2024-07-26 10:22:58.467552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.154 [2024-07-26 10:22:58.486120] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.154 [2024-07-26 10:22:58.486160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.154 [2024-07-26 10:22:58.486174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.154 [2024-07-26 10:22:58.504043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.154 [2024-07-26 10:22:58.504091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.154 [2024-07-26 10:22:58.504111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.154 [2024-07-26 10:22:58.522238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.154 [2024-07-26 10:22:58.522276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.154 [2024-07-26 10:22:58.522290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.154 [2024-07-26 10:22:58.539808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.154 [2024-07-26 10:22:58.539847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.154 [2024-07-26 10:22:58.539860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.154 [2024-07-26 10:22:58.558446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.154 [2024-07-26 10:22:58.558515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.154 [2024-07-26 10:22:58.558552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.154 [2024-07-26 10:22:58.577382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.154 [2024-07-26 10:22:58.577465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.154 [2024-07-26 10:22:58.577492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.154 [2024-07-26 10:22:58.596148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.154 [2024-07-26 10:22:58.596187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.154 [2024-07-26 10:22:58.596201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:17:45.414 [2024-07-26 10:22:58.614271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.414 [2024-07-26 10:22:58.614310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.414 [2024-07-26 10:22:58.614323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.414 [2024-07-26 10:22:58.640392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.414 [2024-07-26 10:22:58.640443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.414 [2024-07-26 10:22:58.640455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.414 [2024-07-26 10:22:58.657983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.414 [2024-07-26 10:22:58.658035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.414 [2024-07-26 10:22:58.658048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.414 [2024-07-26 10:22:58.674434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.414 [2024-07-26 10:22:58.674482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.414 [2024-07-26 10:22:58.674494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.414 [2024-07-26 10:22:58.692439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.414 [2024-07-26 10:22:58.692509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.414 [2024-07-26 10:22:58.692522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.414 [2024-07-26 10:22:58.711851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.414 [2024-07-26 10:22:58.711892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.414 [2024-07-26 10:22:58.711906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.414 [2024-07-26 10:22:58.730156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.414 [2024-07-26 10:22:58.730203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.414 [2024-07-26 10:22:58.730216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.414 [2024-07-26 10:22:58.748629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.414 [2024-07-26 10:22:58.748679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.414 [2024-07-26 10:22:58.748707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.414 [2024-07-26 10:22:58.765946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.414 [2024-07-26 10:22:58.765998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.414 [2024-07-26 10:22:58.766050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.414 [2024-07-26 10:22:58.784781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.414 [2024-07-26 10:22:58.784820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.414 [2024-07-26 10:22:58.784834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.414 [2024-07-26 10:22:58.804159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.414 [2024-07-26 10:22:58.804199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.414 [2024-07-26 10:22:58.804213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.414 [2024-07-26 10:22:58.821735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.414 [2024-07-26 10:22:58.821800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.414 [2024-07-26 10:22:58.821812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.414 [2024-07-26 10:22:58.839866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.414 [2024-07-26 10:22:58.839903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.414 [2024-07-26 10:22:58.839917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.414 [2024-07-26 10:22:58.857871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.414 [2024-07-26 10:22:58.857923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.414 [2024-07-26 
10:22:58.857935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.674 [2024-07-26 10:22:58.876120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.674 [2024-07-26 10:22:58.876159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.674 [2024-07-26 10:22:58.876173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.674 [2024-07-26 10:22:58.894239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.674 [2024-07-26 10:22:58.894278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.674 [2024-07-26 10:22:58.894291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.674 [2024-07-26 10:22:58.913328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.674 [2024-07-26 10:22:58.913366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.674 [2024-07-26 10:22:58.913380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.674 [2024-07-26 10:22:58.932571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.674 [2024-07-26 10:22:58.932649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.674 [2024-07-26 10:22:58.932663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.674 [2024-07-26 10:22:58.950401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.674 [2024-07-26 10:22:58.950454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.674 [2024-07-26 10:22:58.950482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.674 [2024-07-26 10:22:58.967569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.674 [2024-07-26 10:22:58.967627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.674 [2024-07-26 10:22:58.967640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.674 [2024-07-26 10:22:58.984870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.674 [2024-07-26 10:22:58.984905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18483 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:45.674 [2024-07-26 10:22:58.984918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.674 [2024-07-26 10:22:59.003972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.674 [2024-07-26 10:22:59.004044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.674 [2024-07-26 10:22:59.004060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.674 [2024-07-26 10:22:59.024519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.674 [2024-07-26 10:22:59.024600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.674 [2024-07-26 10:22:59.024622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.674 [2024-07-26 10:22:59.042893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.674 [2024-07-26 10:22:59.042945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.674 [2024-07-26 10:22:59.042959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.674 [2024-07-26 10:22:59.061046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.674 [2024-07-26 10:22:59.061083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.674 [2024-07-26 10:22:59.061096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.674 [2024-07-26 10:22:59.079843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.674 [2024-07-26 10:22:59.079880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.674 [2024-07-26 10:22:59.079902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.674 [2024-07-26 10:22:59.098148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.674 [2024-07-26 10:22:59.098186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.674 [2024-07-26 10:22:59.098199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.674 [2024-07-26 10:22:59.117083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.674 [2024-07-26 10:22:59.117121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:17426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.674 [2024-07-26 10:22:59.117135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.933 [2024-07-26 10:22:59.136068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.934 [2024-07-26 10:22:59.136130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.934 [2024-07-26 10:22:59.136153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.934 [2024-07-26 10:22:59.154590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.934 [2024-07-26 10:22:59.154670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.934 [2024-07-26 10:22:59.154683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.934 [2024-07-26 10:22:59.173479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.934 [2024-07-26 10:22:59.173532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.934 [2024-07-26 10:22:59.173545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.934 [2024-07-26 10:22:59.192300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.934 [2024-07-26 10:22:59.192338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.934 [2024-07-26 10:22:59.192367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.934 [2024-07-26 10:22:59.211378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.934 [2024-07-26 10:22:59.211461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.934 [2024-07-26 10:22:59.211473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.934 [2024-07-26 10:22:59.231539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.934 [2024-07-26 10:22:59.231623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.934 [2024-07-26 10:22:59.231636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.934 [2024-07-26 10:22:59.248775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.934 [2024-07-26 10:22:59.248822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.934 [2024-07-26 10:22:59.248834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.934 [2024-07-26 10:22:59.265169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.934 [2024-07-26 10:22:59.265218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.934 [2024-07-26 10:22:59.265230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.934 [2024-07-26 10:22:59.280910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.934 [2024-07-26 10:22:59.280958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.934 [2024-07-26 10:22:59.280970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.934 [2024-07-26 10:22:59.296971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.934 [2024-07-26 10:22:59.297020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.934 [2024-07-26 10:22:59.297047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.934 [2024-07-26 10:22:59.314728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.934 [2024-07-26 10:22:59.314779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.934 [2024-07-26 10:22:59.314791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.934 [2024-07-26 10:22:59.332398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.934 [2024-07-26 10:22:59.332450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.934 [2024-07-26 10:22:59.332462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.934 [2024-07-26 10:22:59.349329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.934 [2024-07-26 10:22:59.349379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.934 [2024-07-26 10:22:59.349391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.934 [2024-07-26 10:22:59.365589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 
00:17:45.934 [2024-07-26 10:22:59.365646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.934 [2024-07-26 10:22:59.365659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.934 [2024-07-26 10:22:59.383064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:45.934 [2024-07-26 10:22:59.383109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.934 [2024-07-26 10:22:59.383121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.193 [2024-07-26 10:22:59.401853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:46.193 [2024-07-26 10:22:59.401903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.193 [2024-07-26 10:22:59.401914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.193 [2024-07-26 10:22:59.418384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:46.193 [2024-07-26 10:22:59.418431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.193 [2024-07-26 10:22:59.418443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.193 [2024-07-26 10:22:59.434215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:46.193 [2024-07-26 10:22:59.434262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.193 [2024-07-26 10:22:59.434274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.193 [2024-07-26 10:22:59.450921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:46.193 [2024-07-26 10:22:59.451001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.194 [2024-07-26 10:22:59.451022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.194 [2024-07-26 10:22:59.469328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17303a0) 00:17:46.194 [2024-07-26 10:22:59.469381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.194 [2024-07-26 10:22:59.469424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.194 00:17:46.194 Latency(us) 00:17:46.194 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:17:46.194 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:46.194 nvme0n1 : 2.01 14020.75 54.77 0.00 0.00 9122.86 7149.38 34555.35 00:17:46.194 =================================================================================================================== 00:17:46.194 Total : 14020.75 54.77 0.00 0.00 9122.86 7149.38 34555.35 00:17:46.194 0 00:17:46.194 10:22:59 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:46.194 10:22:59 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:46.194 10:22:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:46.194 10:22:59 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:46.194 | .driver_specific 00:17:46.194 | .nvme_error 00:17:46.194 | .status_code 00:17:46.194 | .command_transient_transport_error' 00:17:46.453 10:22:59 -- host/digest.sh@71 -- # (( 110 > 0 )) 00:17:46.453 10:22:59 -- host/digest.sh@73 -- # killprocess 83491 00:17:46.453 10:22:59 -- common/autotest_common.sh@926 -- # '[' -z 83491 ']' 00:17:46.453 10:22:59 -- common/autotest_common.sh@930 -- # kill -0 83491 00:17:46.453 10:22:59 -- common/autotest_common.sh@931 -- # uname 00:17:46.453 10:22:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:46.453 10:22:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83491 00:17:46.453 10:22:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:46.453 killing process with pid 83491 00:17:46.453 Received shutdown signal, test time was about 2.000000 seconds 00:17:46.453 00:17:46.453 Latency(us) 00:17:46.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.453 =================================================================================================================== 00:17:46.453 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:46.453 10:22:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:46.453 10:22:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83491' 00:17:46.453 10:22:59 -- common/autotest_common.sh@945 -- # kill 83491 00:17:46.453 10:22:59 -- common/autotest_common.sh@950 -- # wait 83491 00:17:46.712 10:22:59 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:17:46.712 10:22:59 -- host/digest.sh@54 -- # local rw bs qd 00:17:46.712 10:22:59 -- host/digest.sh@56 -- # rw=randread 00:17:46.712 10:22:59 -- host/digest.sh@56 -- # bs=131072 00:17:46.712 10:22:59 -- host/digest.sh@56 -- # qd=16 00:17:46.712 10:22:59 -- host/digest.sh@58 -- # bperfpid=83546 00:17:46.712 10:22:59 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:17:46.712 10:22:59 -- host/digest.sh@60 -- # waitforlisten 83546 /var/tmp/bperf.sock 00:17:46.712 10:22:59 -- common/autotest_common.sh@819 -- # '[' -z 83546 ']' 00:17:46.712 10:22:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:46.712 10:22:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:46.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:46.712 10:22:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:17:46.712 10:22:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:46.712 10:22:59 -- common/autotest_common.sh@10 -- # set +x 00:17:46.712 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:46.712 Zero copy mechanism will not be used. 00:17:46.712 [2024-07-26 10:23:00.037492] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:46.712 [2024-07-26 10:23:00.037562] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83546 ] 00:17:46.971 [2024-07-26 10:23:00.174191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.971 [2024-07-26 10:23:00.261700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.538 10:23:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:47.538 10:23:00 -- common/autotest_common.sh@852 -- # return 0 00:17:47.538 10:23:00 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:47.538 10:23:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:48.106 10:23:01 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:48.106 10:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:48.106 10:23:01 -- common/autotest_common.sh@10 -- # set +x 00:17:48.107 10:23:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:48.107 10:23:01 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:48.107 10:23:01 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:48.107 nvme0n1 00:17:48.107 10:23:01 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:48.107 10:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:48.107 10:23:01 -- common/autotest_common.sh@10 -- # set +x 00:17:48.107 10:23:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:48.107 10:23:01 -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:48.107 10:23:01 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:48.366 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:48.366 Zero copy mechanism will not be used. 00:17:48.366 Running I/O for 2 seconds... 
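[editor's note] The traced digest.sh commands above show how this test case is wired up: a second bdevperf instance (pid 83546) is started with a 131072-byte random-read workload at queue depth 16, the NVMe bdev is attached over TCP with data digest enabled (--ddgst), crc32c corruption is injected through the accel error RPC, the workload is run for two seconds, and the transient transport errors are later read back from the bdev iostat — the same check earlier in this section found 110 such errors for the previous 4096-byte run ("(( 110 > 0 ))"). A condensed sketch of that sequence, assembled only from the commands visible in the trace, is shown below. Paths are abbreviated relative to the spdk repo, the socket path, target address, and counts are specific to this CI run, and the assumption that rpc_cmd addresses the nvmf target application's default RPC socket is mine, since the traced helper hides the underlying call.

  # start bdevperf on its own RPC socket: randread, 128 KiB I/O, queue depth 16, stay idle until resumed (-z)
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  bperf_rpc="scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf-side RPC, as used by bperf_rpc in the trace
  target_rpc="scripts/rpc.py"                          # assumption: rpc_cmd talks to the nvmf target's default socket

  # keep per-error NVMe statistics and retry failed I/O at the bdev layer instead of failing it up
  $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # make sure crc32c error injection starts out disabled, then attach with data digest enabled
  $target_rpc accel_error_inject_error -o crc32c -t disable
  $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # enable crc32c corruption on the target side, exactly as traced (-i 32), so receive-side digest checks fail
  $target_rpc accel_error_inject_error -o crc32c -t corrupt -i 32

  # run the workload, then count COMMAND TRANSIENT TRANSPORT ERROR completions from the iostat output
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  errs=$($bperf_rpc bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errs > 0 ))   # the test passes only if digest corruption produced transient transport errors

The per-command error spam that follows is the expected outcome of this setup: each injected crc32c failure surfaces as a "data digest error" on the qpair plus a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which is what the iostat counter at the end tallies.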
00:17:48.366 [2024-07-26 10:23:01.685745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.366 [2024-07-26 10:23:01.686025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.366 [2024-07-26 10:23:01.686167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.366 [2024-07-26 10:23:01.690625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.366 [2024-07-26 10:23:01.690831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.366 [2024-07-26 10:23:01.691000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.366 [2024-07-26 10:23:01.695410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.695452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.695483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.699950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.700020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.700050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.704155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.704209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.704254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.708444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.708481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.708525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.712568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.712650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.712664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.716711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.716748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.716777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.720851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.720890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.720920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.725009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.725044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.725073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.729058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.729094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.729123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.733118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.733175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.733204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.737370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.737407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.737420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.741518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.741605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.741619] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.745693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.745728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.745756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.749841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.749876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.749905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.753801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.753835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.753864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.757808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.757844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.757872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.761871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.761905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.761934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.766154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.766196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.766225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.770224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.770261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:48.367 [2024-07-26 10:23:01.770291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.774328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.774364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.774393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.778391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.778428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.778456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.782455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.782492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.782521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.786627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.786662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.786691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.790845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.790879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.790908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.795420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.795457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.795487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.799755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.799799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.367 [2024-07-26 10:23:01.799813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.367 [2024-07-26 10:23:01.804263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.367 [2024-07-26 10:23:01.804302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.368 [2024-07-26 10:23:01.804332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.368 [2024-07-26 10:23:01.808858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.368 [2024-07-26 10:23:01.808898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.368 [2024-07-26 10:23:01.808911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.368 [2024-07-26 10:23:01.813315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.368 [2024-07-26 10:23:01.813351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.368 [2024-07-26 10:23:01.813380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.368 [2024-07-26 10:23:01.817959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.368 [2024-07-26 10:23:01.818000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.368 [2024-07-26 10:23:01.818014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.628 [2024-07-26 10:23:01.822540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.628 [2024-07-26 10:23:01.822620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.628 [2024-07-26 10:23:01.822634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.628 [2024-07-26 10:23:01.827014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.628 [2024-07-26 10:23:01.827052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.628 [2024-07-26 10:23:01.827083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.831388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.831423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.831452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.835820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.835859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.835872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.840074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.840128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.840141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.844635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.844682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.844695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.849019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.849070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.849082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.853613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.853671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.853682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.858042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.858093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.858106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.862451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 
00:17:48.629 [2024-07-26 10:23:01.862498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.862510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.866640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.866697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.866709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.870920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.870969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.870981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.875419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.875469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.875481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.879815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.879849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.879861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.884225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.884273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.884285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.888426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.888475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.888486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.892681] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.892729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.892741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.896911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.896961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.896973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.901122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.901170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.901198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.905461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.905509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.905521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.909657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.909706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.909717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.913780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.913827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.913838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.918202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.918254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.918266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.922620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.922679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.922692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.927188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.927252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.927263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.931739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.931772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.931785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.936281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.936316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.936329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.940717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.629 [2024-07-26 10:23:01.940751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.629 [2024-07-26 10:23:01.940763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.629 [2024-07-26 10:23:01.945241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:01.945275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:01.945288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:01.949654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:01.949699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:01.949711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:01.953990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:01.954023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:01.954035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:01.958351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:01.958399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:01.958410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:01.962734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:01.962783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:01.962795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:01.966948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:01.966997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:01.967009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:01.971307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:01.971353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:01.971364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:01.975466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:01.975512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:01.975524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:01.979478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:01.979524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:01.979535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:01.983720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:01.983753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:01.983766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:01.988020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:01.988053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:01.988064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:01.992361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:01.992410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:01.992422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:01.996531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:01.996580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:01.996603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:02.000623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:02.000682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:02.000694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:02.004952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:02.005002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:02.005014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:02.009271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:02.009320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:48.630 [2024-07-26 10:23:02.009331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:02.013606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:02.013663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:02.013675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:02.017793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:02.017840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:02.017852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:02.021975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:02.022022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:02.022033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:02.026094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:02.026142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:02.026153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:02.030204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:02.030252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:02.030263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:02.034397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:02.034445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:02.034457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:02.038401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:02.038449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:02.038460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:02.042431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:02.042479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:02.042491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:02.046409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:02.046455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:02.046466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:02.050342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:02.050389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.630 [2024-07-26 10:23:02.050400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.630 [2024-07-26 10:23:02.054411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.630 [2024-07-26 10:23:02.054457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.631 [2024-07-26 10:23:02.054468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.631 [2024-07-26 10:23:02.058489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.631 [2024-07-26 10:23:02.058536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.631 [2024-07-26 10:23:02.058547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.631 [2024-07-26 10:23:02.062453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.631 [2024-07-26 10:23:02.062500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.631 [2024-07-26 10:23:02.062512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.631 [2024-07-26 10:23:02.066444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.631 [2024-07-26 10:23:02.066491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.631 [2024-07-26 10:23:02.066502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.631 [2024-07-26 10:23:02.070437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.631 [2024-07-26 10:23:02.070485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.631 [2024-07-26 10:23:02.070496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.631 [2024-07-26 10:23:02.074484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.631 [2024-07-26 10:23:02.074532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.631 [2024-07-26 10:23:02.074543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.631 [2024-07-26 10:23:02.078463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.631 [2024-07-26 10:23:02.078513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.631 [2024-07-26 10:23:02.078524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.631 [2024-07-26 10:23:02.082448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.631 [2024-07-26 10:23:02.082497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.631 [2024-07-26 10:23:02.082508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.891 [2024-07-26 10:23:02.086290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.891 [2024-07-26 10:23:02.086320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.891 [2024-07-26 10:23:02.086348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.891 [2024-07-26 10:23:02.090222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.891 [2024-07-26 10:23:02.090270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.891 [2024-07-26 10:23:02.090282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.891 [2024-07-26 10:23:02.094313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 
00:17:48.891 [2024-07-26 10:23:02.094360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.891 [2024-07-26 10:23:02.094371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.891 [2024-07-26 10:23:02.098268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.891 [2024-07-26 10:23:02.098317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.891 [2024-07-26 10:23:02.098328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.891 [2024-07-26 10:23:02.102353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.891 [2024-07-26 10:23:02.102401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.102412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.106338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.106386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.106397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.110323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.110370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.110381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.114259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.114305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.114332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.118324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.118370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.118382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.122452] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.122499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.122510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.126607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.126664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.126676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.130775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.130820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.130830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.135039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.135086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.135098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.139151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.139197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.139225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.143335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.143382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.143394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.147405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.147452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.147463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.151812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.151852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.151864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.156229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.156279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.156291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.160559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.160616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.160627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.164990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.165040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.165051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.169264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.169324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.169335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.173563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.173651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.173663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.177791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.177855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.177867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.181845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.181892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.181904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.185802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.185866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.185878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.189826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.189857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.189870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.193887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.193935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.193946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.197882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.197930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.197942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.202015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.202070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.202083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.206303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.206354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.206366] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.210366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.892 [2024-07-26 10:23:02.210413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.892 [2024-07-26 10:23:02.210425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.892 [2024-07-26 10:23:02.214448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.214497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.214509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.218628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.218688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.218703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.222876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.222911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.222924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.227056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.227103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.227115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.231282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.231330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.231342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.235322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.235369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:48.893 [2024-07-26 10:23:02.235381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.239461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.239508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.239519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.243653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.243718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.243730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.247781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.247830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.247842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.251908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.251942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.251954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.256103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.256151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.256162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.260239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.260286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.260297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.264360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.264408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.264419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.268531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.268579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.268602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.272683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.272730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.272742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.276748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.276795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.276806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.280696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.280744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.280755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.284812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.284860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.284871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.288727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.288774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.288785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.292899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.292950] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.292962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.297180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.297229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.297241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.301328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.301375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.301387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.305492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.305533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.305545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.309550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.309609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.309620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.313615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.313664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.313675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.317690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.893 [2024-07-26 10:23:02.317738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.317749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.321684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 
00:17:48.893 [2024-07-26 10:23:02.321731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.893 [2024-07-26 10:23:02.321742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.893 [2024-07-26 10:23:02.325699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.894 [2024-07-26 10:23:02.325746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.894 [2024-07-26 10:23:02.325757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.894 [2024-07-26 10:23:02.329804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.894 [2024-07-26 10:23:02.329868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.894 [2024-07-26 10:23:02.329879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.894 [2024-07-26 10:23:02.333927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.894 [2024-07-26 10:23:02.333974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.894 [2024-07-26 10:23:02.333985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.894 [2024-07-26 10:23:02.338102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.894 [2024-07-26 10:23:02.338150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.894 [2024-07-26 10:23:02.338162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.894 [2024-07-26 10:23:02.342253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:48.894 [2024-07-26 10:23:02.342301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.894 [2024-07-26 10:23:02.342312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.154 [2024-07-26 10:23:02.346437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.154 [2024-07-26 10:23:02.346485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.154 [2024-07-26 10:23:02.346497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.154 [2024-07-26 10:23:02.350486] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.154 [2024-07-26 10:23:02.350534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.154 [2024-07-26 10:23:02.350560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.154 [2024-07-26 10:23:02.354609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.154 [2024-07-26 10:23:02.354668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.154 [2024-07-26 10:23:02.354679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.154 [2024-07-26 10:23:02.358634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.154 [2024-07-26 10:23:02.358689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.154 [2024-07-26 10:23:02.358701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.154 [2024-07-26 10:23:02.362875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.154 [2024-07-26 10:23:02.362923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.154 [2024-07-26 10:23:02.362935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.154 [2024-07-26 10:23:02.367101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.154 [2024-07-26 10:23:02.367149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.154 [2024-07-26 10:23:02.367161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.154 [2024-07-26 10:23:02.371358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.154 [2024-07-26 10:23:02.371407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.154 [2024-07-26 10:23:02.371419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.154 [2024-07-26 10:23:02.375888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.154 [2024-07-26 10:23:02.375923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.154 [2024-07-26 10:23:02.375936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:17:49.154 [2024-07-26 10:23:02.380330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.154 [2024-07-26 10:23:02.380380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.154 [2024-07-26 10:23:02.380391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.154 [2024-07-26 10:23:02.384755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.154 [2024-07-26 10:23:02.384796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.154 [2024-07-26 10:23:02.384809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.154 [2024-07-26 10:23:02.389261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.154 [2024-07-26 10:23:02.389311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.154 [2024-07-26 10:23:02.389322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.154 [2024-07-26 10:23:02.393704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.154 [2024-07-26 10:23:02.393751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.393763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.398051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.398099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.398110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.402490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.402536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.402548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.406751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.406783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.406794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.410837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.410883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.410895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.414896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.414943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.414955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.418988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.419034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.419046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.423367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.423402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.423414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.427825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.427860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.427873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.432255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.432302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.432314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.436631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.436692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.436704] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.440865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.440914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.440925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.444994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.445042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.445053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.449432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.449478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.449489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.453621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.453669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.453680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.457846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.457895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.457907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.461937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.461987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.461999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.466148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.466200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.466213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.470327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.470376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.470388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.474601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.474660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.474674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.478978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.479011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.479024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.483418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.483453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.483465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.487755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.487789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.487803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.492132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.492178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.492203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.496559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.496625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.496638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.500873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.500908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.155 [2024-07-26 10:23:02.500920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.155 [2024-07-26 10:23:02.505352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.155 [2024-07-26 10:23:02.505402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.505414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.156 [2024-07-26 10:23:02.509761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.156 [2024-07-26 10:23:02.509807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.509836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.156 [2024-07-26 10:23:02.514365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.156 [2024-07-26 10:23:02.514413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.514426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.156 [2024-07-26 10:23:02.518912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.156 [2024-07-26 10:23:02.518945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.518958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.156 [2024-07-26 10:23:02.523345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.156 [2024-07-26 10:23:02.523395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.523407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.156 [2024-07-26 10:23:02.527993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.156 [2024-07-26 10:23:02.528070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.528082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.156 [2024-07-26 10:23:02.532409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.156 [2024-07-26 10:23:02.532441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.532452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.156 [2024-07-26 10:23:02.536921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.156 [2024-07-26 10:23:02.536954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.536967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.156 [2024-07-26 10:23:02.541414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.156 [2024-07-26 10:23:02.541448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.541460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.156 [2024-07-26 10:23:02.546063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.156 [2024-07-26 10:23:02.546110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.546127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.156 [2024-07-26 10:23:02.550745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.156 [2024-07-26 10:23:02.550793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.550805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.156 [2024-07-26 10:23:02.555271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.156 [2024-07-26 10:23:02.555319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.555332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.156 [2024-07-26 10:23:02.559914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 
00:17:49.156 [2024-07-26 10:23:02.559949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.559962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.156 [2024-07-26 10:23:02.564198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.156 [2024-07-26 10:23:02.564247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.564259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.156 [2024-07-26 10:23:02.568525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.156 [2024-07-26 10:23:02.568574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.568585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.156 [2024-07-26 10:23:02.572735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.156 [2024-07-26 10:23:02.572783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.572795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.156 [2024-07-26 10:23:02.577073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.156 [2024-07-26 10:23:02.577122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.577134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.156 [2024-07-26 10:23:02.581407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.156 [2024-07-26 10:23:02.581456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.581468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.156 [2024-07-26 10:23:02.585654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.156 [2024-07-26 10:23:02.585702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.585714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.156 [2024-07-26 10:23:02.589856] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.156 [2024-07-26 10:23:02.589904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.589915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.156 [2024-07-26 10:23:02.594027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.156 [2024-07-26 10:23:02.594075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.594087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.156 [2024-07-26 10:23:02.598555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.156 [2024-07-26 10:23:02.598612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.598624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.156 [2024-07-26 10:23:02.602723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.156 [2024-07-26 10:23:02.602771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.602783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.156 [2024-07-26 10:23:02.606877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.156 [2024-07-26 10:23:02.606925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.156 [2024-07-26 10:23:02.606937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.417 [2024-07-26 10:23:02.610936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.417 [2024-07-26 10:23:02.610982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.417 [2024-07-26 10:23:02.610994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.417 [2024-07-26 10:23:02.615153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.417 [2024-07-26 10:23:02.615200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.417 [2024-07-26 10:23:02.615228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:49.417 [2024-07-26 10:23:02.619365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.417 [2024-07-26 10:23:02.619413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.417 [2024-07-26 10:23:02.619424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.417 [2024-07-26 10:23:02.623554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.417 [2024-07-26 10:23:02.623610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.417 [2024-07-26 10:23:02.623622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.417 [2024-07-26 10:23:02.627737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.417 [2024-07-26 10:23:02.627769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.417 [2024-07-26 10:23:02.627782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.417 [2024-07-26 10:23:02.631768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.417 [2024-07-26 10:23:02.631817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.417 [2024-07-26 10:23:02.631830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.417 [2024-07-26 10:23:02.636259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.417 [2024-07-26 10:23:02.636308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.417 [2024-07-26 10:23:02.636319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.417 [2024-07-26 10:23:02.640617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.417 [2024-07-26 10:23:02.640676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.417 [2024-07-26 10:23:02.640689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.417 [2024-07-26 10:23:02.644900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.417 [2024-07-26 10:23:02.644949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.417 [2024-07-26 10:23:02.644976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.417 [2024-07-26 10:23:02.649218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.417 [2024-07-26 10:23:02.649267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.417 [2024-07-26 10:23:02.649279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.417 [2024-07-26 10:23:02.653614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.417 [2024-07-26 10:23:02.653672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.417 [2024-07-26 10:23:02.653684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.417 [2024-07-26 10:23:02.658129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.417 [2024-07-26 10:23:02.658162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.417 [2024-07-26 10:23:02.658175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.417 [2024-07-26 10:23:02.662580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.417 [2024-07-26 10:23:02.662639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.417 [2024-07-26 10:23:02.662651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.417 [2024-07-26 10:23:02.666881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.417 [2024-07-26 10:23:02.666915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.417 [2024-07-26 10:23:02.666928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.417 [2024-07-26 10:23:02.671145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.417 [2024-07-26 10:23:02.671193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.417 [2024-07-26 10:23:02.671235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.417 [2024-07-26 10:23:02.675499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.417 [2024-07-26 10:23:02.675556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.417 [2024-07-26 10:23:02.675568] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.417 [2024-07-26 10:23:02.679414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.679461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.679473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.683343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.683389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.683401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.687284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.687331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.687343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.691267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.691314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.691325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.695266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.695312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.695324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.699277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.699323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.699335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.703221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.703268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:49.418 [2024-07-26 10:23:02.703280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.707306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.707353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.707364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.711543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.711587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.711600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.715634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.715710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.715723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.719803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.719837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.719849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.723937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.723989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.724001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.728257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.728307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.728319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.732581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.732639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.732651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.736935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.736984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.736996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.741210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.741260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.741273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.745235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.745284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.745295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.749294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.749342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.749354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.753388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.753437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.753448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.757464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.757513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.757524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.761420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.761467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.761479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.765594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.765654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.765667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.769727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.769773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.769784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.773645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.773691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.773703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.777709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.777739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.777750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.781591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.781639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.781650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.418 [2024-07-26 10:23:02.785624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.418 [2024-07-26 10:23:02.785681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.418 [2024-07-26 10:23:02.785692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.419 [2024-07-26 10:23:02.789777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 
00:17:49.419 [2024-07-26 10:23:02.789824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.419 [2024-07-26 10:23:02.789835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.419 [2024-07-26 10:23:02.793806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.419 [2024-07-26 10:23:02.793854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.419 [2024-07-26 10:23:02.793865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.419 [2024-07-26 10:23:02.797889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.419 [2024-07-26 10:23:02.797937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.419 [2024-07-26 10:23:02.797948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.419 [2024-07-26 10:23:02.801888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.419 [2024-07-26 10:23:02.801934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.419 [2024-07-26 10:23:02.801946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.419 [2024-07-26 10:23:02.805985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.419 [2024-07-26 10:23:02.806031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.419 [2024-07-26 10:23:02.806042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.419 [2024-07-26 10:23:02.810002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.419 [2024-07-26 10:23:02.810049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.419 [2024-07-26 10:23:02.810060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.419 [2024-07-26 10:23:02.813998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.419 [2024-07-26 10:23:02.814049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.419 [2024-07-26 10:23:02.814061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.419 [2024-07-26 10:23:02.818173] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.419 [2024-07-26 10:23:02.818236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.419 [2024-07-26 10:23:02.818248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.419 [2024-07-26 10:23:02.822329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.419 [2024-07-26 10:23:02.822377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.419 [2024-07-26 10:23:02.822389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.419 [2024-07-26 10:23:02.826716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.419 [2024-07-26 10:23:02.826763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.419 [2024-07-26 10:23:02.826774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.419 [2024-07-26 10:23:02.830814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.419 [2024-07-26 10:23:02.830861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.419 [2024-07-26 10:23:02.830873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.419 [2024-07-26 10:23:02.834771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.419 [2024-07-26 10:23:02.834817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.419 [2024-07-26 10:23:02.834828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.419 [2024-07-26 10:23:02.838951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.419 [2024-07-26 10:23:02.839000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.419 [2024-07-26 10:23:02.839012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.419 [2024-07-26 10:23:02.843107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.419 [2024-07-26 10:23:02.843156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.419 [2024-07-26 10:23:02.843167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:49.419 [2024-07-26 10:23:02.847375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.419 [2024-07-26 10:23:02.847423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.419 [2024-07-26 10:23:02.847435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.419 [2024-07-26 10:23:02.851400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.419 [2024-07-26 10:23:02.851430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.419 [2024-07-26 10:23:02.851459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.419 [2024-07-26 10:23:02.855364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.419 [2024-07-26 10:23:02.855412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.419 [2024-07-26 10:23:02.855424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.419 [2024-07-26 10:23:02.859432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.419 [2024-07-26 10:23:02.859479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.419 [2024-07-26 10:23:02.859490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.419 [2024-07-26 10:23:02.863616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.419 [2024-07-26 10:23:02.863694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.419 [2024-07-26 10:23:02.863707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.419 [2024-07-26 10:23:02.867605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.419 [2024-07-26 10:23:02.867651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.419 [2024-07-26 10:23:02.867688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.871515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.871563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.871574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.875492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.875539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.875551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.879452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.879499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.879511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.883453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.883500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.883511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.887434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.887481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.887492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.891371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.891418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.891429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.895304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.895350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.895362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.899535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.899582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.899603] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.903786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.903819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.903832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.908083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.908130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.908157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.912340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.912388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.912400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.916443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.916490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.916501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.920684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.920743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.920755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.925311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.925359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.925371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.929672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.929719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.929730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.934297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.934346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.934357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.938477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.938524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.938536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.942664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.942710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.942721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.947163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.947243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.947254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.951406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.951454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.951466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.955754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.955788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.955800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.960208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.960263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.960274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.964763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.964812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.964824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.969222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.969269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.680 [2024-07-26 10:23:02.969296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.680 [2024-07-26 10:23:02.973650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.680 [2024-07-26 10:23:02.973695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:02.973706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:02.978030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.681 [2024-07-26 10:23:02.978063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:02.978076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:02.982370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.681 [2024-07-26 10:23:02.982418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:02.982429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:02.986718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.681 [2024-07-26 10:23:02.986765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:02.986777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:02.991113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.681 [2024-07-26 10:23:02.991163] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:02.991210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:02.995482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.681 [2024-07-26 10:23:02.995528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:02.995539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:02.999641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.681 [2024-07-26 10:23:02.999706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:02.999719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:03.003864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.681 [2024-07-26 10:23:03.003898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:03.003910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:03.008338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.681 [2024-07-26 10:23:03.008387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:03.008398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:03.012728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.681 [2024-07-26 10:23:03.012778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:03.012790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:03.017290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.681 [2024-07-26 10:23:03.017353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:03.017366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:03.021640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 
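The run above is hitting the same failure over and over: the host recomputes the CRC-32C data digest (DDGST) over each received data PDU, the recomputed value does not match the digest carried on the wire, and nvme_tcp.c reports "data digest error" on the qpair. Below is a minimal, self-contained sketch of that comparison; it uses a plain bitwise CRC-32C and illustrative buffer/variable names, not the accel-sequence path (nvme_tcp_accel_seq_recv_compute_crc32_done) named in the log.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/*
 * Reference CRC-32C (Castagnoli), the digest NVMe/TCP carries as HDGST/DDGST.
 * Bitwise form for clarity; the receive path in the log computes it through
 * SPDK's accel framework rather than a loop like this.
 */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ ((crc & 1u) ? 0x82F63B78u : 0u);
    }
    return ~crc;
}

int main(void)
{
    uint8_t payload[512];
    memset(payload, 0xA5, sizeof(payload));

    /* Digest the sender would append to the data PDU. */
    uint32_t ddgst = crc32c(payload, sizeof(payload));

    /* Flip one bit in flight: the recomputed digest no longer matches,
     * which is the condition the log reports as "data digest error". */
    payload[100] ^= 0x01;
    if (crc32c(payload, sizeof(payload)) != ddgst)
        printf("data digest mismatch detected\n");

    return 0;
}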
00:17:49.681 [2024-07-26 10:23:03.021697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:03.021709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:03.025779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.681 [2024-07-26 10:23:03.025827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:03.025855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:03.029992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.681 [2024-07-26 10:23:03.030038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:03.030051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:03.034100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.681 [2024-07-26 10:23:03.034147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:03.034158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:03.038280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.681 [2024-07-26 10:23:03.038328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:03.038339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:03.042364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.681 [2024-07-26 10:23:03.042413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:03.042424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:03.046360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.681 [2024-07-26 10:23:03.046408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:03.046419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:03.050427] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.681 [2024-07-26 10:23:03.050475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:03.050487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:03.054493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.681 [2024-07-26 10:23:03.054539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:03.054566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:03.058596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.681 [2024-07-26 10:23:03.058654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:03.058665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:03.062542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.681 [2024-07-26 10:23:03.062609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:03.062621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:03.066324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.681 [2024-07-26 10:23:03.066371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:03.066382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:03.070308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.681 [2024-07-26 10:23:03.070356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.681 [2024-07-26 10:23:03.070367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.681 [2024-07-26 10:23:03.074278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.681 [2024-07-26 10:23:03.074326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.682 [2024-07-26 10:23:03.074337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:17:49.682 [2024-07-26 10:23:03.078233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.682 [2024-07-26 10:23:03.078280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.682 [2024-07-26 10:23:03.078292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.682 [2024-07-26 10:23:03.082224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.682 [2024-07-26 10:23:03.082272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.682 [2024-07-26 10:23:03.082283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.682 [2024-07-26 10:23:03.086094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.682 [2024-07-26 10:23:03.086140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.682 [2024-07-26 10:23:03.086151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.682 [2024-07-26 10:23:03.090156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.682 [2024-07-26 10:23:03.090205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.682 [2024-07-26 10:23:03.090216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.682 [2024-07-26 10:23:03.094130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.682 [2024-07-26 10:23:03.094176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.682 [2024-07-26 10:23:03.094203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.682 [2024-07-26 10:23:03.098234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.682 [2024-07-26 10:23:03.098283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.682 [2024-07-26 10:23:03.098294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.682 [2024-07-26 10:23:03.102335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.682 [2024-07-26 10:23:03.102381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.682 [2024-07-26 10:23:03.102393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.682 [2024-07-26 10:23:03.106213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.682 [2024-07-26 10:23:03.106261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.682 [2024-07-26 10:23:03.106272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.682 [2024-07-26 10:23:03.110125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.682 [2024-07-26 10:23:03.110172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.682 [2024-07-26 10:23:03.110183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.682 [2024-07-26 10:23:03.114111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.682 [2024-07-26 10:23:03.114159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.682 [2024-07-26 10:23:03.114171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.682 [2024-07-26 10:23:03.118299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.682 [2024-07-26 10:23:03.118346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.682 [2024-07-26 10:23:03.118357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.682 [2024-07-26 10:23:03.122238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.682 [2024-07-26 10:23:03.122286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.682 [2024-07-26 10:23:03.122297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.682 [2024-07-26 10:23:03.126250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.682 [2024-07-26 10:23:03.126297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.682 [2024-07-26 10:23:03.126308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.682 [2024-07-26 10:23:03.130332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.682 [2024-07-26 10:23:03.130380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.682 [2024-07-26 10:23:03.130392] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.945 [2024-07-26 10:23:03.134492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.945 [2024-07-26 10:23:03.134539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.945 [2024-07-26 10:23:03.134551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.945 [2024-07-26 10:23:03.138425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.945 [2024-07-26 10:23:03.138472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.945 [2024-07-26 10:23:03.138483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.946 [2024-07-26 10:23:03.142287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.946 [2024-07-26 10:23:03.142336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.946 [2024-07-26 10:23:03.142347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.946 [2024-07-26 10:23:03.146197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.946 [2024-07-26 10:23:03.146244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.946 [2024-07-26 10:23:03.146255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.946 [2024-07-26 10:23:03.150113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.946 [2024-07-26 10:23:03.150159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.946 [2024-07-26 10:23:03.150171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.946 [2024-07-26 10:23:03.154014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.946 [2024-07-26 10:23:03.154060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.946 [2024-07-26 10:23:03.154071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.946 [2024-07-26 10:23:03.157942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.946 [2024-07-26 10:23:03.157988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:49.946 [2024-07-26 10:23:03.158000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.946 [2024-07-26 10:23:03.161866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.946 [2024-07-26 10:23:03.161913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.946 [2024-07-26 10:23:03.161924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.946 [2024-07-26 10:23:03.165744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.946 [2024-07-26 10:23:03.165789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.946 [2024-07-26 10:23:03.165800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.946 [2024-07-26 10:23:03.169699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.946 [2024-07-26 10:23:03.169747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.946 [2024-07-26 10:23:03.169758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.946 [2024-07-26 10:23:03.173632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.946 [2024-07-26 10:23:03.173662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.946 [2024-07-26 10:23:03.173673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.946 [2024-07-26 10:23:03.177451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.946 [2024-07-26 10:23:03.177499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.946 [2024-07-26 10:23:03.177510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.946 [2024-07-26 10:23:03.181460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.946 [2024-07-26 10:23:03.181508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.946 [2024-07-26 10:23:03.181520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.946 [2024-07-26 10:23:03.185376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.946 [2024-07-26 10:23:03.185424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.946 [2024-07-26 10:23:03.185435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.946 [2024-07-26 10:23:03.189442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.946 [2024-07-26 10:23:03.189490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.946 [2024-07-26 10:23:03.189502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.946 [2024-07-26 10:23:03.193384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.946 [2024-07-26 10:23:03.193433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.946 [2024-07-26 10:23:03.193444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.946 [2024-07-26 10:23:03.197454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.946 [2024-07-26 10:23:03.197502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.947 [2024-07-26 10:23:03.197513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.947 [2024-07-26 10:23:03.201571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.947 [2024-07-26 10:23:03.201628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.947 [2024-07-26 10:23:03.201639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.947 [2024-07-26 10:23:03.205561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.947 [2024-07-26 10:23:03.205619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.947 [2024-07-26 10:23:03.205631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.947 [2024-07-26 10:23:03.209597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.947 [2024-07-26 10:23:03.209656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.947 [2024-07-26 10:23:03.209667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.947 [2024-07-26 10:23:03.213589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.947 [2024-07-26 10:23:03.213648] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.947 [2024-07-26 10:23:03.213659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.947 [2024-07-26 10:23:03.217660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.947 [2024-07-26 10:23:03.217706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.947 [2024-07-26 10:23:03.217717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.947 [2024-07-26 10:23:03.221687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.947 [2024-07-26 10:23:03.221733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.947 [2024-07-26 10:23:03.221743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.947 [2024-07-26 10:23:03.225641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.947 [2024-07-26 10:23:03.225696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.947 [2024-07-26 10:23:03.225707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.947 [2024-07-26 10:23:03.229750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.947 [2024-07-26 10:23:03.229796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.947 [2024-07-26 10:23:03.229807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.947 [2024-07-26 10:23:03.233709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.947 [2024-07-26 10:23:03.233756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.947 [2024-07-26 10:23:03.233766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.947 [2024-07-26 10:23:03.237840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.947 [2024-07-26 10:23:03.237887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.947 [2024-07-26 10:23:03.237899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.947 [2024-07-26 10:23:03.241878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 
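Each digest error above is followed by the affected READ and its completion, printed with status (00/22): status code type 0x0 (generic) and status code 0x22, Transient Transport Error, with dnr:0, so the command remains eligible for retry. The short sketch below unpacks those fields from the 16-bit status word of a completion entry; the struct and field names are illustrative, with bit offsets per the NVMe base specification.

#include <stdint.h>
#include <stdio.h>

/* Decode of the 16-bit word formed by NVMe CQE Dword 3 bits 31:16. */
struct cqe_status {
    unsigned p;    /* phase tag                        */
    unsigned sc;   /* status code, e.g. 0x22           */
    unsigned sct;  /* status code type, 0x0 = generic  */
    unsigned m;    /* more                             */
    unsigned dnr;  /* do not retry                     */
};

static struct cqe_status decode_status(uint16_t status_raw)
{
    struct cqe_status s = {
        .p   = (status_raw >> 0)  & 0x1,
        .sc  = (status_raw >> 1)  & 0xff,
        .sct = (status_raw >> 9)  & 0x7,
        .m   = (status_raw >> 14) & 0x1,
        .dnr = (status_raw >> 15) & 0x1,
    };
    return s;
}

int main(void)
{
    /* 0x0044 encodes sct=0x0, sc=0x22, p=m=dnr=0 -- the status printed above. */
    struct cqe_status s = decode_status(0x0044);

    printf("sct=%#x sc=%#x p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}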
00:17:49.947 [2024-07-26 10:23:03.241926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.947 [2024-07-26 10:23:03.241937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.947 [2024-07-26 10:23:03.245902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.947 [2024-07-26 10:23:03.245948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.947 [2024-07-26 10:23:03.245959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.947 [2024-07-26 10:23:03.250087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.947 [2024-07-26 10:23:03.250132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.947 [2024-07-26 10:23:03.250143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.947 [2024-07-26 10:23:03.254175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.947 [2024-07-26 10:23:03.254237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.947 [2024-07-26 10:23:03.254249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.947 [2024-07-26 10:23:03.258219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.947 [2024-07-26 10:23:03.258266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.948 [2024-07-26 10:23:03.258278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.948 [2024-07-26 10:23:03.262381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.948 [2024-07-26 10:23:03.262428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.948 [2024-07-26 10:23:03.262439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.948 [2024-07-26 10:23:03.266448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.948 [2024-07-26 10:23:03.266496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.948 [2024-07-26 10:23:03.266508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.948 [2024-07-26 10:23:03.270526] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.948 [2024-07-26 10:23:03.270574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.948 [2024-07-26 10:23:03.270600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.948 [2024-07-26 10:23:03.274951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.948 [2024-07-26 10:23:03.274986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.948 [2024-07-26 10:23:03.274998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.948 [2024-07-26 10:23:03.279304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.948 [2024-07-26 10:23:03.279351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.948 [2024-07-26 10:23:03.279363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.948 [2024-07-26 10:23:03.283592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.948 [2024-07-26 10:23:03.283647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.948 [2024-07-26 10:23:03.283683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.948 [2024-07-26 10:23:03.288036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.948 [2024-07-26 10:23:03.288083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.948 [2024-07-26 10:23:03.288111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.948 [2024-07-26 10:23:03.292321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.948 [2024-07-26 10:23:03.292370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.948 [2024-07-26 10:23:03.292381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.948 [2024-07-26 10:23:03.296434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.948 [2024-07-26 10:23:03.296483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.948 [2024-07-26 10:23:03.296494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:17:49.948 [2024-07-26 10:23:03.300483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.948 [2024-07-26 10:23:03.300532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.948 [2024-07-26 10:23:03.300543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.948 [2024-07-26 10:23:03.304675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.948 [2024-07-26 10:23:03.304733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.948 [2024-07-26 10:23:03.304745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.948 [2024-07-26 10:23:03.308668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.948 [2024-07-26 10:23:03.308715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.948 [2024-07-26 10:23:03.308726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.948 [2024-07-26 10:23:03.312681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.948 [2024-07-26 10:23:03.312729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.948 [2024-07-26 10:23:03.312740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.948 [2024-07-26 10:23:03.316743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.948 [2024-07-26 10:23:03.316790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.948 [2024-07-26 10:23:03.316802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.948 [2024-07-26 10:23:03.320885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.948 [2024-07-26 10:23:03.320933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.948 [2024-07-26 10:23:03.320945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.948 [2024-07-26 10:23:03.325068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.948 [2024-07-26 10:23:03.325117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.948 [2024-07-26 10:23:03.325128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.949 [2024-07-26 10:23:03.329249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.949 [2024-07-26 10:23:03.329298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.949 [2024-07-26 10:23:03.329309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.949 [2024-07-26 10:23:03.333490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.949 [2024-07-26 10:23:03.333536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.949 [2024-07-26 10:23:03.333563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.949 [2024-07-26 10:23:03.337641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.949 [2024-07-26 10:23:03.337698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.949 [2024-07-26 10:23:03.337709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.949 [2024-07-26 10:23:03.341654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.949 [2024-07-26 10:23:03.341701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.949 [2024-07-26 10:23:03.341713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.949 [2024-07-26 10:23:03.345671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.949 [2024-07-26 10:23:03.345718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.949 [2024-07-26 10:23:03.345730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.949 [2024-07-26 10:23:03.349689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.949 [2024-07-26 10:23:03.349736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.949 [2024-07-26 10:23:03.349747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.949 [2024-07-26 10:23:03.353788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.949 [2024-07-26 10:23:03.353834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.949 [2024-07-26 10:23:03.353846] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.949 [2024-07-26 10:23:03.357836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.949 [2024-07-26 10:23:03.357885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.949 [2024-07-26 10:23:03.357896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.949 [2024-07-26 10:23:03.362027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.949 [2024-07-26 10:23:03.362075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.949 [2024-07-26 10:23:03.362087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.949 [2024-07-26 10:23:03.366109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.949 [2024-07-26 10:23:03.366157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.949 [2024-07-26 10:23:03.366169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.949 [2024-07-26 10:23:03.370101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.949 [2024-07-26 10:23:03.370147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.949 [2024-07-26 10:23:03.370158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.949 [2024-07-26 10:23:03.374151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.949 [2024-07-26 10:23:03.374199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.949 [2024-07-26 10:23:03.374211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.949 [2024-07-26 10:23:03.378165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.949 [2024-07-26 10:23:03.378229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.949 [2024-07-26 10:23:03.378240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.949 [2024-07-26 10:23:03.382272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.949 [2024-07-26 10:23:03.382319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:49.949 [2024-07-26 10:23:03.382332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.949 [2024-07-26 10:23:03.386257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.949 [2024-07-26 10:23:03.386304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.949 [2024-07-26 10:23:03.386315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.949 [2024-07-26 10:23:03.390357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.949 [2024-07-26 10:23:03.390407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.949 [2024-07-26 10:23:03.390418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.949 [2024-07-26 10:23:03.394428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:49.949 [2024-07-26 10:23:03.394475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.949 [2024-07-26 10:23:03.394486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.212 [2024-07-26 10:23:03.398543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.212 [2024-07-26 10:23:03.398599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.212 [2024-07-26 10:23:03.398612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:50.212 [2024-07-26 10:23:03.402635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.212 [2024-07-26 10:23:03.402682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.212 [2024-07-26 10:23:03.402694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:50.212 [2024-07-26 10:23:03.406674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.212 [2024-07-26 10:23:03.406721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.212 [2024-07-26 10:23:03.406733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.212 [2024-07-26 10:23:03.410611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.212 [2024-07-26 10:23:03.410657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.212 [2024-07-26 10:23:03.410669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.212 [2024-07-26 10:23:03.414652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.212 [2024-07-26 10:23:03.414699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.212 [2024-07-26 10:23:03.414710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:50.212 [2024-07-26 10:23:03.418703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.212 [2024-07-26 10:23:03.418749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.212 [2024-07-26 10:23:03.418761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:50.212 [2024-07-26 10:23:03.422741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.212 [2024-07-26 10:23:03.422790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.212 [2024-07-26 10:23:03.422801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.212 [2024-07-26 10:23:03.426789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.212 [2024-07-26 10:23:03.426834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.212 [2024-07-26 10:23:03.426845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.212 [2024-07-26 10:23:03.430762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.212 [2024-07-26 10:23:03.430808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.212 [2024-07-26 10:23:03.430820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:50.212 [2024-07-26 10:23:03.434807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.212 [2024-07-26 10:23:03.434853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.212 [2024-07-26 10:23:03.434864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:50.212 [2024-07-26 10:23:03.438778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.212 [2024-07-26 10:23:03.438825] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.212 [2024-07-26 10:23:03.438836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.212 [2024-07-26 10:23:03.442766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.442813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.442824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.446743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.446791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.446802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.450732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.450778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.450790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.454861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.454909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.454920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.458890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.458936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.458948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.463009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.463055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.463067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.467114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.467172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.467184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.471142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.471189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.471217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.475241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.475293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.475304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.479259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.479304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.479315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.483433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.483480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.483491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.487615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.487684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.487698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.491702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.491735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.491747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.495894] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.495929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.495941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.500507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.500556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.500567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.504778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.504827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.504839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.509190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.509256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.509268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.513570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.513644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.513656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.517912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.517960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.517972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.522383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.522429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.522440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.526635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.526691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.526704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.530782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.530830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.530857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.535084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.535131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.535142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.539547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.539623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.539637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.544202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.544250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.544262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.213 [2024-07-26 10:23:03.548637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.213 [2024-07-26 10:23:03.548671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.213 [2024-07-26 10:23:03.548683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.553059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.553108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.553121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.557615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.557659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.557672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.561969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.562031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.562042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.566347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.566394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.566406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.570876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.570911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.570923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.575252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.575300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.575311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.579724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.579757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.579768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.584138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.584182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.584210] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.588523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.588569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.588596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.592868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.592915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.592927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.597191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.597256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.597283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.601404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.601453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.601464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.605633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.605692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.605704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.609966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.610028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.610039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.614202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.614249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:50.214 [2024-07-26 10:23:03.614260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.618392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.618439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.618451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.622540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.622599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.622612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.626702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.626748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.626759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.630774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.630824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.630835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.634868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.634915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.634926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.639168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.639213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.639225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.643353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.643400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.643411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.647445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.647492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.647503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.651591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.651646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.651667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.656068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.656114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.656125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.660214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.660263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.214 [2024-07-26 10:23:03.660275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.214 [2024-07-26 10:23:03.664356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.214 [2024-07-26 10:23:03.664405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.215 [2024-07-26 10:23:03.664416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.473 [2024-07-26 10:23:03.668585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.473 [2024-07-26 10:23:03.668643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.473 [2024-07-26 10:23:03.668655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:50.473 [2024-07-26 10:23:03.672573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420) 00:17:50.473 [2024-07-26 10:23:03.672631] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:50.473 [2024-07-26 10:23:03.672644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:17:50.473 [2024-07-26 10:23:03.676824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420)
00:17:50.473 [2024-07-26 10:23:03.676875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:50.473 [2024-07-26 10:23:03.676886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:50.473 [2024-07-26 10:23:03.680829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d23420)
00:17:50.473 [2024-07-26 10:23:03.680857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:50.473 [2024-07-26 10:23:03.680869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:17:50.473
00:17:50.473 Latency(us)
00:17:50.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:50.473 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:17:50.473 nvme0n1 : 2.00 7385.35 923.17 0.00 0.00 2163.29 1772.45 4944.99
00:17:50.473 ===================================================================================================================
00:17:50.473 Total : 7385.35 923.17 0.00 0.00 2163.29 1772.45 4944.99
00:17:50.473 0
00:17:50.473 10:23:03 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:50.473 10:23:03 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:50.473 10:23:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:50.473 10:23:03 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:50.473 | .driver_specific
00:17:50.473 | .nvme_error
00:17:50.473 | .status_code
00:17:50.473 | .command_transient_transport_error'
00:17:50.732 10:23:03 -- host/digest.sh@71 -- # (( 477 > 0 ))
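The trace above is how host/digest.sh obtains the transient error count for the randread leg: it issues bdev_get_iostat against the bperf RPC socket (the error counters are available because the controller was configured with --nvme-error-stat) and filters the per-bdev NVMe error statistics with jq. A minimal stand-alone sketch of the same query, using the socket path, bdev name and jq filter exactly as they appear in this log:

    # Sketch only, not part of the captured run: read the transient transport error
    # counter for nvme0n1 from a bdevperf instance listening on /var/tmp/bperf.sock.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
          | .driver_specific
          | .nvme_error
          | .status_code
          | .command_transient_transport_error'

A non-zero result (477 in this run) is what the (( 477 > 0 )) check above asserts: the injected data digest errors were surfaced as TRANSIENT TRANSPORT ERROR completions while the bdevperf job itself still completed (Fail/s is 0.00 in the table above).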
00:17:50.732 10:23:03 -- host/digest.sh@73 -- # killprocess 83546
00:17:50.732 10:23:03 -- common/autotest_common.sh@926 -- # '[' -z 83546 ']'
00:17:50.732 10:23:03 -- common/autotest_common.sh@930 -- # kill -0 83546
00:17:50.732 10:23:03 -- common/autotest_common.sh@931 -- # uname
00:17:50.732 10:23:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:17:50.732 10:23:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83546
00:17:50.732 10:23:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:17:50.732 killing process with pid 83546
00:17:50.732 10:23:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:17:50.732 10:23:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83546'
00:17:50.732 10:23:03 -- common/autotest_common.sh@945 -- # kill 83546
00:17:50.732 Received shutdown signal, test time was about 2.000000 seconds
00:17:50.732
00:17:50.732 Latency(us)
00:17:50.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:50.732 ===================================================================================================================
00:17:50.732 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:50.732 10:23:03 -- common/autotest_common.sh@950 -- # wait 83546
00:17:50.991 10:23:04 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:17:50.991 10:23:04 -- host/digest.sh@54 -- # local rw bs qd
00:17:50.991 10:23:04 -- host/digest.sh@56 -- # rw=randwrite
00:17:50.991 10:23:04 -- host/digest.sh@56 -- # bs=4096
00:17:50.991 10:23:04 -- host/digest.sh@56 -- # qd=128
00:17:50.991 10:23:04 -- host/digest.sh@58 -- # bperfpid=83608
00:17:50.991 10:23:04 -- host/digest.sh@60 -- # waitforlisten 83608 /var/tmp/bperf.sock
00:17:50.991 10:23:04 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:17:50.991 10:23:04 -- common/autotest_common.sh@819 -- # '[' -z 83608 ']'
00:17:50.991 10:23:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:17:50.991 10:23:04 -- common/autotest_common.sh@824 -- # local max_retries=100
00:17:50.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:17:50.991 10:23:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:17:50.991 10:23:04 -- common/autotest_common.sh@828 -- # xtrace_disable
00:17:50.991 10:23:04 -- common/autotest_common.sh@10 -- # set +x
00:17:50.991 [2024-07-26 10:23:04.254542] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization...
00:17:50.991 [2024-07-26 10:23:04.254660] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83608 ]
00:17:50.991 [2024-07-26 10:23:04.391874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:51.250 [2024-07-26 10:23:04.478654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:51.818 10:23:05 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:17:51.818 10:23:05 -- common/autotest_common.sh@852 -- # return 0
00:17:51.818 10:23:05 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:51.818 10:23:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:52.076 10:23:05 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:52.076 10:23:05 -- common/autotest_common.sh@551 -- # xtrace_disable
00:17:52.076 10:23:05 -- common/autotest_common.sh@10 -- # set +x
00:17:52.076 10:23:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:17:52.076 10:23:05 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:52.076 10:23:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:52.335 nvme0n1
00:17:52.335 10:23:05 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:17:52.335 10:23:05 -- common/autotest_common.sh@551 -- # xtrace_disable
00:17:52.335 10:23:05 -- common/autotest_common.sh@10 --
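The xtrace above captures the setup for the randwrite leg of the digest-error test: bdevperf is started with its own RPC socket, bdev_nvme_set_options enables NVMe error statistics and unlimited bdev-level retries, crc32c error injection is disabled while the controller attaches with data digest enabled, and injection is then re-armed in corrupt mode. Condensed into a plain shell sketch (every command and argument is taken from the trace; the backgrounding and the RPC shorthand variable are added here only for readability):

    # Start bdevperf with a private RPC socket: core mask 0x2, randwrite, 4 KiB I/O,
    # 2-second run, queue depth 128; -z keeps it waiting until perform_tests is called.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Keep per-controller NVMe error statistics and retry transport errors at the bdev layer (-1 = unlimited).
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # No error injection while the controller is attached.
    $RPC accel_error_inject_error -o crc32c -t disable

    # Attach the NVMe-oF/TCP controller with data digest enabled (--ddgst).
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Re-arm crc32c error injection in corrupt mode (-i 256, as used by the test),
    # so data digest verification fails on received PDUs.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256

The workload itself is then started over the same socket with bdevperf.py perform_tests, which is where the "Running I/O for 2 seconds..." output below begins.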
# set +x 00:17:52.335 10:23:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.335 10:23:05 -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:52.335 10:23:05 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:52.593 Running I/O for 2 seconds... 00:17:52.593 [2024-07-26 10:23:05.853876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190ddc00 00:17:52.593 [2024-07-26 10:23:05.855274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.593 [2024-07-26 10:23:05.855308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.593 [2024-07-26 10:23:05.870378] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190fef90 00:17:52.593 [2024-07-26 10:23:05.871771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.593 [2024-07-26 10:23:05.871809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.593 [2024-07-26 10:23:05.886324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190ff3c8 00:17:52.593 [2024-07-26 10:23:05.887815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.593 [2024-07-26 10:23:05.887852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:52.593 [2024-07-26 10:23:05.902995] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190feb58 00:17:52.593 [2024-07-26 10:23:05.904440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.593 [2024-07-26 10:23:05.904473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:52.593 [2024-07-26 10:23:05.919816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190fe720 00:17:52.593 [2024-07-26 10:23:05.921166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.593 [2024-07-26 10:23:05.921231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:52.593 [2024-07-26 10:23:05.935935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190fe2e8 00:17:52.593 [2024-07-26 10:23:05.937296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.593 [2024-07-26 10:23:05.937343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:52.593 [2024-07-26 10:23:05.952117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190fdeb0 00:17:52.593 
[2024-07-26 10:23:05.953472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.593 [2024-07-26 10:23:05.953499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:52.593 [2024-07-26 10:23:05.968138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190fda78 00:17:52.593 [2024-07-26 10:23:05.969563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.593 [2024-07-26 10:23:05.969649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:52.594 [2024-07-26 10:23:05.984365] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190fd640 00:17:52.594 [2024-07-26 10:23:05.985701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.594 [2024-07-26 10:23:05.985735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:52.594 [2024-07-26 10:23:05.999929] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190fd208 00:17:52.594 [2024-07-26 10:23:06.001250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.594 [2024-07-26 10:23:06.001295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:52.594 [2024-07-26 10:23:06.015825] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190fcdd0 00:17:52.594 [2024-07-26 10:23:06.017045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.594 [2024-07-26 10:23:06.017076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:52.594 [2024-07-26 10:23:06.031502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190fc998 00:17:52.594 [2024-07-26 10:23:06.032741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.594 [2024-07-26 10:23:06.032772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:52.594 [2024-07-26 10:23:06.047329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190fc560 00:17:52.594 [2024-07-26 10:23:06.048552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.594 [2024-07-26 10:23:06.048597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:52.852 [2024-07-26 10:23:06.063381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190fc128 
00:17:52.852 [2024-07-26 10:23:06.064666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.852 [2024-07-26 10:23:06.064694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:52.852 [2024-07-26 10:23:06.078705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190fbcf0 00:17:52.852 [2024-07-26 10:23:06.079948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.852 [2024-07-26 10:23:06.079981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:52.852 [2024-07-26 10:23:06.094170] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190fb8b8 00:17:52.852 [2024-07-26 10:23:06.095386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.852 [2024-07-26 10:23:06.095430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:52.852 [2024-07-26 10:23:06.109336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190fb480 00:17:52.852 [2024-07-26 10:23:06.110557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.852 [2024-07-26 10:23:06.110602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:52.852 [2024-07-26 10:23:06.125666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190fb048 00:17:52.852 [2024-07-26 10:23:06.126945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.852 [2024-07-26 10:23:06.126976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:52.852 [2024-07-26 10:23:06.141683] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190fac10 00:17:52.852 [2024-07-26 10:23:06.142912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.852 [2024-07-26 10:23:06.142957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:52.852 [2024-07-26 10:23:06.157369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190fa7d8 00:17:52.852 [2024-07-26 10:23:06.158606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.852 [2024-07-26 10:23:06.158652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:52.852 [2024-07-26 10:23:06.172692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with 
pdu=0x2000190fa3a0 00:17:52.852 [2024-07-26 10:23:06.173846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.852 [2024-07-26 10:23:06.173907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:52.852 [2024-07-26 10:23:06.187747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f9f68 00:17:52.852 [2024-07-26 10:23:06.188939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.852 [2024-07-26 10:23:06.188972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:52.853 [2024-07-26 10:23:06.202732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f9b30 00:17:52.853 [2024-07-26 10:23:06.203907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.853 [2024-07-26 10:23:06.203940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:52.853 [2024-07-26 10:23:06.217692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f96f8 00:17:52.853 [2024-07-26 10:23:06.218854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.853 [2024-07-26 10:23:06.218899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:52.853 [2024-07-26 10:23:06.232923] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f92c0 00:17:52.853 [2024-07-26 10:23:06.234049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.853 [2024-07-26 10:23:06.234094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:52.853 [2024-07-26 10:23:06.247908] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f8e88 00:17:52.853 [2024-07-26 10:23:06.249096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.853 [2024-07-26 10:23:06.249140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:52.853 [2024-07-26 10:23:06.262996] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f8a50 00:17:52.853 [2024-07-26 10:23:06.264153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.853 [2024-07-26 10:23:06.264183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:52.853 [2024-07-26 10:23:06.278213] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x8e9c90) with pdu=0x2000190f8618 00:17:52.853 [2024-07-26 10:23:06.279336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.853 [2024-07-26 10:23:06.279368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:52.853 [2024-07-26 10:23:06.293139] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f81e0 00:17:52.853 [2024-07-26 10:23:06.294232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.853 [2024-07-26 10:23:06.294264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:53.111 [2024-07-26 10:23:06.308122] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f7da8 00:17:53.111 [2024-07-26 10:23:06.309204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.111 [2024-07-26 10:23:06.309234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:53.111 [2024-07-26 10:23:06.323260] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f7970 00:17:53.111 [2024-07-26 10:23:06.324376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.111 [2024-07-26 10:23:06.324405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:53.111 [2024-07-26 10:23:06.338496] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f7538 00:17:53.111 [2024-07-26 10:23:06.339609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.111 [2024-07-26 10:23:06.339684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.111 [2024-07-26 10:23:06.354735] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f7100 00:17:53.111 [2024-07-26 10:23:06.355816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.111 [2024-07-26 10:23:06.355848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.111 [2024-07-26 10:23:06.370111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f6cc8 00:17:53.111 [2024-07-26 10:23:06.371172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.111 [2024-07-26 10:23:06.371217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:53.111 [2024-07-26 10:23:06.385776] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x8e9c90) with pdu=0x2000190f6890 00:17:53.111 [2024-07-26 10:23:06.386869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.111 [2024-07-26 10:23:06.386903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:53.111 [2024-07-26 10:23:06.400576] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f6458 00:17:53.111 [2024-07-26 10:23:06.401571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.111 [2024-07-26 10:23:06.401612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:53.111 [2024-07-26 10:23:06.414965] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f6020 00:17:53.111 [2024-07-26 10:23:06.415976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.111 [2024-07-26 10:23:06.416024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:53.111 [2024-07-26 10:23:06.429555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f5be8 00:17:53.111 [2024-07-26 10:23:06.430530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.112 [2024-07-26 10:23:06.430559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:53.112 [2024-07-26 10:23:06.444123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f57b0 00:17:53.112 [2024-07-26 10:23:06.445169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.112 [2024-07-26 10:23:06.445211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:53.112 [2024-07-26 10:23:06.458734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f5378 00:17:53.112 [2024-07-26 10:23:06.459790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.112 [2024-07-26 10:23:06.459821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:53.112 [2024-07-26 10:23:06.473695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f4f40 00:17:53.112 [2024-07-26 10:23:06.474623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.112 [2024-07-26 10:23:06.474677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:53.112 [2024-07-26 10:23:06.488080] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f4b08 00:17:53.112 [2024-07-26 10:23:06.489070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.112 [2024-07-26 10:23:06.489099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:53.112 [2024-07-26 10:23:06.502763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f46d0 00:17:53.112 [2024-07-26 10:23:06.503714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.112 [2024-07-26 10:23:06.503744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:53.112 [2024-07-26 10:23:06.517507] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f4298 00:17:53.112 [2024-07-26 10:23:06.518446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.112 [2024-07-26 10:23:06.518475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:53.112 [2024-07-26 10:23:06.531943] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f3e60 00:17:53.112 [2024-07-26 10:23:06.532861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.112 [2024-07-26 10:23:06.532890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:53.112 [2024-07-26 10:23:06.546582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f3a28 00:17:53.112 [2024-07-26 10:23:06.547513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.112 [2024-07-26 10:23:06.547542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:53.112 [2024-07-26 10:23:06.561173] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f35f0 00:17:53.112 [2024-07-26 10:23:06.562081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.112 [2024-07-26 10:23:06.562115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:53.371 [2024-07-26 10:23:06.575506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f31b8 00:17:53.371 [2024-07-26 10:23:06.576445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.371 [2024-07-26 10:23:06.576471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:53.371 [2024-07-26 10:23:06.590087] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f2d80 00:17:53.371 [2024-07-26 10:23:06.590983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.371 [2024-07-26 10:23:06.591030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:53.371 [2024-07-26 10:23:06.605647] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f2948 00:17:53.371 [2024-07-26 10:23:06.606486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.371 [2024-07-26 10:23:06.606512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:53.371 [2024-07-26 10:23:06.621801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f2510 00:17:53.371 [2024-07-26 10:23:06.622665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.371 [2024-07-26 10:23:06.622693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:53.371 [2024-07-26 10:23:06.637939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f20d8 00:17:53.371 [2024-07-26 10:23:06.638799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.371 [2024-07-26 10:23:06.638829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:53.371 [2024-07-26 10:23:06.652733] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f1ca0 00:17:53.371 [2024-07-26 10:23:06.653595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.371 [2024-07-26 10:23:06.653629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:53.371 [2024-07-26 10:23:06.667552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f1868 00:17:53.371 [2024-07-26 10:23:06.668480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.371 [2024-07-26 10:23:06.668510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:53.371 [2024-07-26 10:23:06.682175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f1430 00:17:53.371 [2024-07-26 10:23:06.682964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.371 [2024-07-26 10:23:06.682989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:53.371 
[2024-07-26 10:23:06.696690] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f0ff8 00:17:53.371 [2024-07-26 10:23:06.697475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.371 [2024-07-26 10:23:06.697500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:53.371 [2024-07-26 10:23:06.711165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f0bc0 00:17:53.371 [2024-07-26 10:23:06.711968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.371 [2024-07-26 10:23:06.712014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:53.371 [2024-07-26 10:23:06.725599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f0788 00:17:53.371 [2024-07-26 10:23:06.726357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.371 [2024-07-26 10:23:06.726382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:53.371 [2024-07-26 10:23:06.740248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190f0350 00:17:53.371 [2024-07-26 10:23:06.741028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.371 [2024-07-26 10:23:06.741084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:53.371 [2024-07-26 10:23:06.754718] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190eff18 00:17:53.371 [2024-07-26 10:23:06.755440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.371 [2024-07-26 10:23:06.755465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:53.371 [2024-07-26 10:23:06.769015] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190efae0 00:17:53.371 [2024-07-26 10:23:06.769789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.371 [2024-07-26 10:23:06.769815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:53.371 [2024-07-26 10:23:06.783467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190ef6a8 00:17:53.372 [2024-07-26 10:23:06.784249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.372 [2024-07-26 10:23:06.784275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 
p:0 m:0 dnr:0 00:17:53.372 [2024-07-26 10:23:06.798157] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190ef270 00:17:53.372 [2024-07-26 10:23:06.798911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.372 [2024-07-26 10:23:06.798941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:53.372 [2024-07-26 10:23:06.812685] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190eee38 00:17:53.372 [2024-07-26 10:23:06.813375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.372 [2024-07-26 10:23:06.813402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.631 [2024-07-26 10:23:06.827257] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190eea00 00:17:53.631 [2024-07-26 10:23:06.828014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.631 [2024-07-26 10:23:06.828039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.631 [2024-07-26 10:23:06.841748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190ee5c8 00:17:53.631 [2024-07-26 10:23:06.842416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.631 [2024-07-26 10:23:06.842440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:53.631 [2024-07-26 10:23:06.856241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190ee190 00:17:53.631 [2024-07-26 10:23:06.856933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.631 [2024-07-26 10:23:06.856958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:53.631 [2024-07-26 10:23:06.871034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190edd58 00:17:53.631 [2024-07-26 10:23:06.871762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.631 [2024-07-26 10:23:06.871787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:53.631 [2024-07-26 10:23:06.886669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190ed920 00:17:53.631 [2024-07-26 10:23:06.887363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.631 [2024-07-26 10:23:06.887392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 
sqhd:003d p:0 m:0 dnr:0 00:17:53.631 [2024-07-26 10:23:06.902161] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190ed4e8 00:17:53.631 [2024-07-26 10:23:06.902825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.631 [2024-07-26 10:23:06.902853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:53.631 [2024-07-26 10:23:06.918082] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190ed0b0 00:17:53.631 [2024-07-26 10:23:06.918751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.631 [2024-07-26 10:23:06.918778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:53.631 [2024-07-26 10:23:06.932663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190ecc78 00:17:53.631 [2024-07-26 10:23:06.933315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.631 [2024-07-26 10:23:06.933360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:53.631 [2024-07-26 10:23:06.946923] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190ec840 00:17:53.631 [2024-07-26 10:23:06.947564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.631 [2024-07-26 10:23:06.947627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:53.631 [2024-07-26 10:23:06.961104] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190ec408 00:17:53.631 [2024-07-26 10:23:06.961727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.631 [2024-07-26 10:23:06.961772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:53.631 [2024-07-26 10:23:06.975791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190ebfd0 00:17:53.631 [2024-07-26 10:23:06.976467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.631 [2024-07-26 10:23:06.976494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:53.631 [2024-07-26 10:23:06.990651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190ebb98 00:17:53.631 [2024-07-26 10:23:06.991254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.631 [2024-07-26 10:23:06.991284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:53.631 [2024-07-26 10:23:07.005153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190eb760 00:17:53.631 [2024-07-26 10:23:07.005782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.631 [2024-07-26 10:23:07.005807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:53.631 [2024-07-26 10:23:07.019945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190eb328 00:17:53.631 [2024-07-26 10:23:07.020537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.631 [2024-07-26 10:23:07.020563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:53.631 [2024-07-26 10:23:07.034996] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190eaef0 00:17:53.631 [2024-07-26 10:23:07.035538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.631 [2024-07-26 10:23:07.035581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:53.631 [2024-07-26 10:23:07.051036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190eaab8 00:17:53.631 [2024-07-26 10:23:07.051566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.631 [2024-07-26 10:23:07.051599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:53.631 [2024-07-26 10:23:07.066899] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190ea680 00:17:53.631 [2024-07-26 10:23:07.067418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.631 [2024-07-26 10:23:07.067443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:53.632 [2024-07-26 10:23:07.082994] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190ea248 00:17:53.632 [2024-07-26 10:23:07.083526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.632 [2024-07-26 10:23:07.083552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:53.892 [2024-07-26 10:23:07.098643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e9e10 00:17:53.892 [2024-07-26 10:23:07.099158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.892 [2024-07-26 10:23:07.099184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:53.892 [2024-07-26 10:23:07.114551] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e99d8 00:17:53.892 [2024-07-26 10:23:07.115069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.892 [2024-07-26 10:23:07.115095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:53.892 [2024-07-26 10:23:07.130009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e95a0 00:17:53.892 [2024-07-26 10:23:07.130506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.892 [2024-07-26 10:23:07.130532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:53.892 [2024-07-26 10:23:07.145296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e9168 00:17:53.892 [2024-07-26 10:23:07.145795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.892 [2024-07-26 10:23:07.145822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:53.892 [2024-07-26 10:23:07.160726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e8d30 00:17:53.892 [2024-07-26 10:23:07.161180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.892 [2024-07-26 10:23:07.161205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:53.892 [2024-07-26 10:23:07.176061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e88f8 00:17:53.892 [2024-07-26 10:23:07.176510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.892 [2024-07-26 10:23:07.176552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:53.892 [2024-07-26 10:23:07.191301] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e84c0 00:17:53.892 [2024-07-26 10:23:07.191795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.892 [2024-07-26 10:23:07.191821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:53.892 [2024-07-26 10:23:07.206764] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e8088 00:17:53.892 [2024-07-26 10:23:07.207217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.892 [2024-07-26 10:23:07.207243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:53.892 [2024-07-26 10:23:07.221699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e7c50 00:17:53.892 [2024-07-26 10:23:07.222118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.892 [2024-07-26 10:23:07.222143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:53.892 [2024-07-26 10:23:07.236713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e7818 00:17:53.892 [2024-07-26 10:23:07.237172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.892 [2024-07-26 10:23:07.237198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:53.892 [2024-07-26 10:23:07.251979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e73e0 00:17:53.892 [2024-07-26 10:23:07.252428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.892 [2024-07-26 10:23:07.252457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:53.892 [2024-07-26 10:23:07.267052] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e6fa8 00:17:53.892 [2024-07-26 10:23:07.267437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.892 [2024-07-26 10:23:07.267462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:53.892 [2024-07-26 10:23:07.282041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e6b70 00:17:53.892 [2024-07-26 10:23:07.282419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.892 [2024-07-26 10:23:07.282444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:53.892 [2024-07-26 10:23:07.297021] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e6738 00:17:53.892 [2024-07-26 10:23:07.297386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.892 [2024-07-26 10:23:07.297410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.892 [2024-07-26 10:23:07.311950] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e6300 00:17:53.892 [2024-07-26 10:23:07.312327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.892 [2024-07-26 10:23:07.312352] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.892 [2024-07-26 10:23:07.327099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e5ec8 00:17:53.892 [2024-07-26 10:23:07.327450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.892 [2024-07-26 10:23:07.327475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:53.892 [2024-07-26 10:23:07.342121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e5a90 00:17:53.892 [2024-07-26 10:23:07.342458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.892 [2024-07-26 10:23:07.342483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:54.152 [2024-07-26 10:23:07.357412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e5658 00:17:54.152 [2024-07-26 10:23:07.357776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.152 [2024-07-26 10:23:07.357802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:54.152 [2024-07-26 10:23:07.372822] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e5220 00:17:54.152 [2024-07-26 10:23:07.373202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.152 [2024-07-26 10:23:07.373229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:54.152 [2024-07-26 10:23:07.387435] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e4de8 00:17:54.152 [2024-07-26 10:23:07.387805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.152 [2024-07-26 10:23:07.387831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:54.152 [2024-07-26 10:23:07.402062] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e49b0 00:17:54.152 [2024-07-26 10:23:07.402355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.152 [2024-07-26 10:23:07.402380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:54.152 [2024-07-26 10:23:07.416612] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e4578 00:17:54.152 [2024-07-26 10:23:07.416922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.152 [2024-07-26 10:23:07.416947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:54.152 [2024-07-26 10:23:07.431460] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e4140 00:17:54.152 [2024-07-26 10:23:07.431796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.152 [2024-07-26 10:23:07.431822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:54.152 [2024-07-26 10:23:07.446141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e3d08 00:17:54.152 [2024-07-26 10:23:07.446427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.152 [2024-07-26 10:23:07.446452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:54.152 [2024-07-26 10:23:07.461228] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e38d0 00:17:54.152 [2024-07-26 10:23:07.461533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.152 [2024-07-26 10:23:07.461555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:54.152 [2024-07-26 10:23:07.476249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e3498 00:17:54.152 [2024-07-26 10:23:07.476496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.152 [2024-07-26 10:23:07.476566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:54.152 [2024-07-26 10:23:07.490487] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e3060 00:17:54.152 [2024-07-26 10:23:07.490744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.152 [2024-07-26 10:23:07.490768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:54.152 [2024-07-26 10:23:07.504598] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e2c28 00:17:54.152 [2024-07-26 10:23:07.504825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.152 [2024-07-26 10:23:07.504848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:54.152 [2024-07-26 10:23:07.518041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e27f0 00:17:54.152 [2024-07-26 10:23:07.518255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.152 [2024-07-26 
10:23:07.518289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:54.152 [2024-07-26 10:23:07.531563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e23b8 00:17:54.152 [2024-07-26 10:23:07.531806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.152 [2024-07-26 10:23:07.531825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:54.152 [2024-07-26 10:23:07.544909] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e1f80 00:17:54.152 [2024-07-26 10:23:07.545105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.152 [2024-07-26 10:23:07.545124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:54.152 [2024-07-26 10:23:07.558543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e1b48 00:17:54.152 [2024-07-26 10:23:07.558745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.152 [2024-07-26 10:23:07.558766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:54.152 [2024-07-26 10:23:07.571939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e1710 00:17:54.152 [2024-07-26 10:23:07.572134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.152 [2024-07-26 10:23:07.572152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:54.152 [2024-07-26 10:23:07.585484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e12d8 00:17:54.152 [2024-07-26 10:23:07.585691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.152 [2024-07-26 10:23:07.585710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:54.152 [2024-07-26 10:23:07.599512] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e0ea0 00:17:54.152 [2024-07-26 10:23:07.599728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.152 [2024-07-26 10:23:07.599749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:54.411 [2024-07-26 10:23:07.613764] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e0a68 00:17:54.411 [2024-07-26 10:23:07.613918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.411 
[2024-07-26 10:23:07.613938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:54.411 [2024-07-26 10:23:07.629110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e0630 00:17:54.411 [2024-07-26 10:23:07.629278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.411 [2024-07-26 10:23:07.629298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:54.412 [2024-07-26 10:23:07.644820] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190e01f8 00:17:54.412 [2024-07-26 10:23:07.644958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.412 [2024-07-26 10:23:07.645009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:54.412 [2024-07-26 10:23:07.660488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190dfdc0 00:17:54.412 [2024-07-26 10:23:07.660658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.412 [2024-07-26 10:23:07.660678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:54.412 [2024-07-26 10:23:07.675558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190df988 00:17:54.412 [2024-07-26 10:23:07.675718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.412 [2024-07-26 10:23:07.675740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:54.412 [2024-07-26 10:23:07.690072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190df550 00:17:54.412 [2024-07-26 10:23:07.690181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.412 [2024-07-26 10:23:07.690216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:54.412 [2024-07-26 10:23:07.704523] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190df118 00:17:54.412 [2024-07-26 10:23:07.704656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.412 [2024-07-26 10:23:07.704684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:54.412 [2024-07-26 10:23:07.718812] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190dece0 00:17:54.412 [2024-07-26 10:23:07.718905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:54.412 [2024-07-26 10:23:07.718924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:54.412 [2024-07-26 10:23:07.733982] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190de8a8 00:17:54.412 [2024-07-26 10:23:07.734078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.412 [2024-07-26 10:23:07.734098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:54.412 [2024-07-26 10:23:07.749385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190de038 00:17:54.412 [2024-07-26 10:23:07.749463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.412 [2024-07-26 10:23:07.749483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:54.412 [2024-07-26 10:23:07.772553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190de038 00:17:54.412 [2024-07-26 10:23:07.774028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.412 [2024-07-26 10:23:07.774060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.412 [2024-07-26 10:23:07.789106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190de470 00:17:54.412 [2024-07-26 10:23:07.790552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.412 [2024-07-26 10:23:07.790637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.412 [2024-07-26 10:23:07.806522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190de8a8 00:17:54.412 [2024-07-26 10:23:07.807992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.412 [2024-07-26 10:23:07.808025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:54.412 [2024-07-26 10:23:07.823294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9c90) with pdu=0x2000190dece0 00:17:54.412 [2024-07-26 10:23:07.824661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.412 [2024-07-26 10:23:07.824714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:54.412 00:17:54.412 Latency(us) 00:17:54.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.412 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:54.412 nvme0n1 : 2.01 16763.42 65.48 0.00 0.00 7629.62 6523.81 23235.49 00:17:54.412 
=================================================================================================================== 00:17:54.412 Total : 16763.42 65.48 0.00 0.00 7629.62 6523.81 23235.49 00:17:54.412 0 00:17:54.412 10:23:07 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:54.412 10:23:07 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:54.412 10:23:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:54.412 10:23:07 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:54.412 | .driver_specific 00:17:54.412 | .nvme_error 00:17:54.412 | .status_code 00:17:54.412 | .command_transient_transport_error' 00:17:54.671 10:23:08 -- host/digest.sh@71 -- # (( 131 > 0 )) 00:17:54.671 10:23:08 -- host/digest.sh@73 -- # killprocess 83608 00:17:54.671 10:23:08 -- common/autotest_common.sh@926 -- # '[' -z 83608 ']' 00:17:54.671 10:23:08 -- common/autotest_common.sh@930 -- # kill -0 83608 00:17:54.671 10:23:08 -- common/autotest_common.sh@931 -- # uname 00:17:54.671 10:23:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:54.671 10:23:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83608 00:17:54.930 10:23:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:54.930 killing process with pid 83608 00:17:54.930 Received shutdown signal, test time was about 2.000000 seconds 00:17:54.930 00:17:54.930 Latency(us) 00:17:54.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.930 =================================================================================================================== 00:17:54.930 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:54.930 10:23:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:54.930 10:23:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83608' 00:17:54.930 10:23:08 -- common/autotest_common.sh@945 -- # kill 83608 00:17:54.930 10:23:08 -- common/autotest_common.sh@950 -- # wait 83608 00:17:54.930 10:23:08 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:17:54.930 10:23:08 -- host/digest.sh@54 -- # local rw bs qd 00:17:54.930 10:23:08 -- host/digest.sh@56 -- # rw=randwrite 00:17:54.930 10:23:08 -- host/digest.sh@56 -- # bs=131072 00:17:54.930 10:23:08 -- host/digest.sh@56 -- # qd=16 00:17:54.930 10:23:08 -- host/digest.sh@58 -- # bperfpid=83667 00:17:54.930 10:23:08 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:17:54.930 10:23:08 -- host/digest.sh@60 -- # waitforlisten 83667 /var/tmp/bperf.sock 00:17:54.930 10:23:08 -- common/autotest_common.sh@819 -- # '[' -z 83667 ']' 00:17:54.930 10:23:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:54.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:54.930 10:23:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:54.930 10:23:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:54.930 10:23:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:54.930 10:23:08 -- common/autotest_common.sh@10 -- # set +x 00:17:55.190 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:55.190 Zero copy mechanism will not be used. 
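The (( 131 > 0 )) check in the xtrace above consumes a single counter pulled out of the bdev_get_iostat JSON with the jq pipeline shown in the log. Below is a minimal stand-alone sketch of that same query, not part of the test output: it assumes the bdevperf RPC socket /var/tmp/bperf.sock from the log is listening for whichever bdevperf instance currently owns it, and that bdev_nvme_set_options was called with --nvme-error-stat (both taken from the log, not verified here).

  # Re-run the iostat RPC that digest.sh uses and extract the transient transport
  # error counter; the RPC name and jq filter are copied verbatim from the xtrace above.
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The test only passes when at least one transient transport error was recorded.
  (( errcount > 0 )) && echo "transient transport errors accounted: $errcount"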
00:17:55.190 [2024-07-26 10:23:08.389256] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:55.190 [2024-07-26 10:23:08.389354] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83667 ] 00:17:55.190 [2024-07-26 10:23:08.524692] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.190 [2024-07-26 10:23:08.602480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.125 10:23:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:56.125 10:23:09 -- common/autotest_common.sh@852 -- # return 0 00:17:56.125 10:23:09 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:56.125 10:23:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:56.125 10:23:09 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:56.125 10:23:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.125 10:23:09 -- common/autotest_common.sh@10 -- # set +x 00:17:56.125 10:23:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.125 10:23:09 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:56.125 10:23:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:56.693 nvme0n1 00:17:56.693 10:23:09 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:56.693 10:23:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.693 10:23:09 -- common/autotest_common.sh@10 -- # set +x 00:17:56.693 10:23:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.693 10:23:09 -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:56.693 10:23:09 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:56.693 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:56.693 Zero copy mechanism will not be used. 00:17:56.693 Running I/O for 2 seconds... 
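The xtrace above sets up the second digest run: NVMe error accounting is enabled on the bdevperf side, the controller is attached over TCP with data digest (--ddgst) turned on, and the accel crc32c operation is switched from disable to corrupt, so the target-side CRC32C calculation starts failing and the digest errors come back to the host as transient transport errors. A rough, hedged recreation of that sequence is sketched below; it assumes the bdevperf RPC socket is /var/tmp/bperf.sock (as logged), that the bare rpc_cmd calls in the log address the target application's default RPC socket, and all flags are copied verbatim from the xtrace rather than documented here.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  BPERF="$RPC -s /var/tmp/bperf.sock"          # bdevperf-side RPC (bperf_rpc in the log)
  # Enable per-status-code NVMe error accounting and retry failed I/O indefinitely.
  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Start with crc32c error injection disabled on the target side (plain rpc_cmd in the log).
  $RPC accel_error_inject_error -o crc32c -t disable
  # Attach the TCP controller with data digest enabled so data PDUs carry a CRC32C.
  $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Switch injection to corrupt mode; the -o/-t/-i values are taken verbatim from the log.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick off the queued bdevperf jobs, exactly as bperf_py perform_tests does in digest.sh.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests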
00:17:56.693 [2024-07-26 10:23:10.000110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.000436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.000464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.693 [2024-07-26 10:23:10.005173] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.005503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.005527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.693 [2024-07-26 10:23:10.010205] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.010505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.010534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.693 [2024-07-26 10:23:10.015038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.015335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.015363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.693 [2024-07-26 10:23:10.019959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.020302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.020330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.693 [2024-07-26 10:23:10.025031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.025352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.025379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.693 [2024-07-26 10:23:10.030077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.030376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.030403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.693 [2024-07-26 10:23:10.035112] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.035433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.035477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.693 [2024-07-26 10:23:10.040234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.040546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.040583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.693 [2024-07-26 10:23:10.045337] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.045653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.045690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.693 [2024-07-26 10:23:10.050249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.050540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.050566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.693 [2024-07-26 10:23:10.055177] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.055474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.055502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.693 [2024-07-26 10:23:10.060141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.060414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.060474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.693 [2024-07-26 10:23:10.065015] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.065320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.065347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.693 [2024-07-26 10:23:10.069939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.070227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.070253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.693 [2024-07-26 10:23:10.074838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.075147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.075173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.693 [2024-07-26 10:23:10.079925] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.080260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.080287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.693 [2024-07-26 10:23:10.085077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.085405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.085431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.693 [2024-07-26 10:23:10.090072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.090361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.090387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.693 [2024-07-26 10:23:10.094937] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.095224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.095250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.693 [2024-07-26 10:23:10.099876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.100241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.100268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.693 [2024-07-26 10:23:10.105219] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.105514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.105540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.693 [2024-07-26 10:23:10.110334] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.110645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.110681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.693 [2024-07-26 10:23:10.115526] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.693 [2024-07-26 10:23:10.115873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.693 [2024-07-26 10:23:10.115903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.694 [2024-07-26 10:23:10.120844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.694 [2024-07-26 10:23:10.121150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.694 [2024-07-26 10:23:10.121178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.694 [2024-07-26 10:23:10.126124] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.694 [2024-07-26 10:23:10.126460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.694 [2024-07-26 10:23:10.126488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.694 [2024-07-26 10:23:10.131466] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.694 [2024-07-26 10:23:10.131845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.694 [2024-07-26 10:23:10.131873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.694 [2024-07-26 10:23:10.136916] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.694 [2024-07-26 10:23:10.137275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.694 
[2024-07-26 10:23:10.137302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.694 [2024-07-26 10:23:10.142332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.694 [2024-07-26 10:23:10.142644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.694 [2024-07-26 10:23:10.142679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.694 [2024-07-26 10:23:10.147578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.694 [2024-07-26 10:23:10.147954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.694 [2024-07-26 10:23:10.147987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.953 [2024-07-26 10:23:10.152907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.953 [2024-07-26 10:23:10.153254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.953 [2024-07-26 10:23:10.153280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.953 [2024-07-26 10:23:10.157853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.953 [2024-07-26 10:23:10.158181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.953 [2024-07-26 10:23:10.158208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.953 [2024-07-26 10:23:10.162706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.953 [2024-07-26 10:23:10.163035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.953 [2024-07-26 10:23:10.163062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.953 [2024-07-26 10:23:10.167648] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.953 [2024-07-26 10:23:10.168003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.953 [2024-07-26 10:23:10.168062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.953 [2024-07-26 10:23:10.172523] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.953 [2024-07-26 10:23:10.172861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:56.953 [2024-07-26 10:23:10.172894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.953 [2024-07-26 10:23:10.177416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.953 [2024-07-26 10:23:10.177757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.953 [2024-07-26 10:23:10.177802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.953 [2024-07-26 10:23:10.182465] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.953 [2024-07-26 10:23:10.182788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.953 [2024-07-26 10:23:10.182814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.953 [2024-07-26 10:23:10.187500] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.953 [2024-07-26 10:23:10.187879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.953 [2024-07-26 10:23:10.187907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.953 [2024-07-26 10:23:10.192525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.953 [2024-07-26 10:23:10.192903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.953 [2024-07-26 10:23:10.192935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.953 [2024-07-26 10:23:10.197481] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.953 [2024-07-26 10:23:10.197841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.953 [2024-07-26 10:23:10.197873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.953 [2024-07-26 10:23:10.202366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.953 [2024-07-26 10:23:10.202684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.953 [2024-07-26 10:23:10.202711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.953 [2024-07-26 10:23:10.207441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.953 [2024-07-26 10:23:10.207787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.953 [2024-07-26 10:23:10.207814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.953 [2024-07-26 10:23:10.212374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.953 [2024-07-26 10:23:10.212694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.953 [2024-07-26 10:23:10.212721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.217312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.217630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.217656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.222116] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.222421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.222448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.227043] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.227332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.227357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.231868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.232181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.232206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.236939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.237230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.237257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.241734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.242022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.242049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.246511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.246878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.246911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.251590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.251942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.251970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.256525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.256905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.256939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.261506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.261832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.261859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.266464] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.266830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.266862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.271281] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.271556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.271625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.276130] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.276417] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.276444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.280992] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.281283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.281309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.285727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.286014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.286041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.290879] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.291193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.291220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.295787] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.296134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.296176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.300823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.301133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.301175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.305592] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.305872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.305898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.310352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 
10:23:10.310678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.310705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.315107] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.315385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.315410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.319849] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.320170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.320195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.324637] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.324939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.324964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.329449] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.329783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.329813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.334319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.334632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.334657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.339092] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.954 [2024-07-26 10:23:10.339373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.954 [2024-07-26 10:23:10.339399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.954 [2024-07-26 10:23:10.343725] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with 
pdu=0x2000190fef90 00:17:56.955 [2024-07-26 10:23:10.344038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.955 [2024-07-26 10:23:10.344063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.955 [2024-07-26 10:23:10.348556] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.955 [2024-07-26 10:23:10.348915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.955 [2024-07-26 10:23:10.348943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.955 [2024-07-26 10:23:10.353256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.955 [2024-07-26 10:23:10.353536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.955 [2024-07-26 10:23:10.353562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.955 [2024-07-26 10:23:10.357987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.955 [2024-07-26 10:23:10.358253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.955 [2024-07-26 10:23:10.358277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.955 [2024-07-26 10:23:10.362716] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.955 [2024-07-26 10:23:10.362998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.955 [2024-07-26 10:23:10.363024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.955 [2024-07-26 10:23:10.367424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.955 [2024-07-26 10:23:10.367792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.955 [2024-07-26 10:23:10.367825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.955 [2024-07-26 10:23:10.372194] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.955 [2024-07-26 10:23:10.372478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.955 [2024-07-26 10:23:10.372504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.955 [2024-07-26 10:23:10.376969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.955 [2024-07-26 10:23:10.377271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.955 [2024-07-26 10:23:10.377297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.955 [2024-07-26 10:23:10.381676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.955 [2024-07-26 10:23:10.381956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.955 [2024-07-26 10:23:10.381981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.955 [2024-07-26 10:23:10.386393] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.955 [2024-07-26 10:23:10.386707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.955 [2024-07-26 10:23:10.386733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.955 [2024-07-26 10:23:10.391061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.955 [2024-07-26 10:23:10.391338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.955 [2024-07-26 10:23:10.391364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.955 [2024-07-26 10:23:10.395755] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.955 [2024-07-26 10:23:10.396095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.955 [2024-07-26 10:23:10.396121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.955 [2024-07-26 10:23:10.400547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.955 [2024-07-26 10:23:10.400925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.955 [2024-07-26 10:23:10.400973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.955 [2024-07-26 10:23:10.405300] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:56.955 [2024-07-26 10:23:10.405582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.955 [2024-07-26 10:23:10.405615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.214 [2024-07-26 10:23:10.410031] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.214 [2024-07-26 10:23:10.410308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.214 [2024-07-26 10:23:10.410334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.214 [2024-07-26 10:23:10.414719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.414998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.415023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.419368] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.419702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.419729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.424143] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.424421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.424446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.428849] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.429136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.429176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.433553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.433892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.433922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.438461] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.438797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.438828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
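Each injected failure appears in the stream as the same three entries: the TCP transport flags a data digest error on the PDU, nvme_qpair prints the WRITE it belonged to (len:32 blocks, which matches the 131072-byte I/O size if the namespace uses 4 KiB blocks), and the completion carries status 00/22, COMMAND TRANSIENT TRANSPORT ERROR, which is the bucket the --nvme-error-stat counter and the jq query shown earlier accumulate. If this console output were saved to a file, the counter could be cross-checked against it with something like the following (log file name hypothetical):

  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bdevperf.log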
00:17:57.215 [2024-07-26 10:23:10.443437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.443808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.443841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.448410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.448740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.448790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.453131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.453430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.453456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.458140] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.458429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.458456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.463009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.463291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.463317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.467859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.468177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.468202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.472724] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.473008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.473033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.477565] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.477904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.477931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.482375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.482685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.482711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.487050] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.487327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.487353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.491729] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.492079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.492105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.496550] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.496886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.496911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.501287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.501567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.501600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.505972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.506271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.506296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.510739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.511018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.511043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.515304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.515598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.515634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.520087] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.520368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.520394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.524780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.525061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.525086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.529700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.530002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.530023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.534685] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.535017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.535044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.539971] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.215 [2024-07-26 10:23:10.540269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.215 [2024-07-26 10:23:10.540297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.215 [2024-07-26 10:23:10.545117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.545422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.545448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.550199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.550485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.550512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.555334] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.555647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.555695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.560389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.560729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.560760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.565428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.565766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.565793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.570516] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.570868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.570899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.575416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.575774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 
[2024-07-26 10:23:10.575807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.580363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.580703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.580734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.585377] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.585707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.585734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.590216] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.590501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.590527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.594980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.595263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.595289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.600032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.600321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.600347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.604875] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.605166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.605192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.609687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.610001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.610027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.614630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.614952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.614991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.619451] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.619804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.619831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.624269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.624555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.624589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.629247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.629543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.629580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.634066] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.634353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.634378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.638806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.639097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.639123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.643757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.644068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.644095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.648623] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.648938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.648965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.653400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.653737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.653769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.658353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.658669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.658694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.663218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.663507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.663536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.216 [2024-07-26 10:23:10.668021] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.216 [2024-07-26 10:23:10.668295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.216 [2024-07-26 10:23:10.668321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.476 [2024-07-26 10:23:10.673032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.476 [2024-07-26 10:23:10.673355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.476 [2024-07-26 10:23:10.673380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.476 [2024-07-26 10:23:10.678278] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.476 [2024-07-26 10:23:10.678566] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.476 [2024-07-26 10:23:10.678618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.476 [2024-07-26 10:23:10.683544] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.476 [2024-07-26 10:23:10.683876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.476 [2024-07-26 10:23:10.683899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.476 [2024-07-26 10:23:10.688629] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.476 [2024-07-26 10:23:10.688926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.476 [2024-07-26 10:23:10.688965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.476 [2024-07-26 10:23:10.693763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.476 [2024-07-26 10:23:10.694073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.476 [2024-07-26 10:23:10.694115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.476 [2024-07-26 10:23:10.698892] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.476 [2024-07-26 10:23:10.699247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.476 [2024-07-26 10:23:10.699273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.704051] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.477 [2024-07-26 10:23:10.704326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.477 [2024-07-26 10:23:10.704351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.709136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.477 [2024-07-26 10:23:10.709443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.477 [2024-07-26 10:23:10.709469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.714081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.477 [2024-07-26 10:23:10.714362] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.477 [2024-07-26 10:23:10.714388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.718915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.477 [2024-07-26 10:23:10.719228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.477 [2024-07-26 10:23:10.719254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.723565] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.477 [2024-07-26 10:23:10.723924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.477 [2024-07-26 10:23:10.723952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.728309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.477 [2024-07-26 10:23:10.728591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.477 [2024-07-26 10:23:10.728625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.733048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.477 [2024-07-26 10:23:10.733329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.477 [2024-07-26 10:23:10.733354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.737790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.477 [2024-07-26 10:23:10.738089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.477 [2024-07-26 10:23:10.738115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.742470] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.477 [2024-07-26 10:23:10.742807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.477 [2024-07-26 10:23:10.742839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.747283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 
00:17:57.477 [2024-07-26 10:23:10.747565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.477 [2024-07-26 10:23:10.747598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.752041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.477 [2024-07-26 10:23:10.752309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.477 [2024-07-26 10:23:10.752334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.756668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.477 [2024-07-26 10:23:10.756946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.477 [2024-07-26 10:23:10.756972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.761425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.477 [2024-07-26 10:23:10.761778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.477 [2024-07-26 10:23:10.761809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.766086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.477 [2024-07-26 10:23:10.766364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.477 [2024-07-26 10:23:10.766389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.770771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.477 [2024-07-26 10:23:10.771054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.477 [2024-07-26 10:23:10.771079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.775377] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.477 [2024-07-26 10:23:10.775711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.477 [2024-07-26 10:23:10.775739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.780115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.477 [2024-07-26 10:23:10.780410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.477 [2024-07-26 10:23:10.780436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.784745] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.477 [2024-07-26 10:23:10.785027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.477 [2024-07-26 10:23:10.785052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.789412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.477 [2024-07-26 10:23:10.789780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.477 [2024-07-26 10:23:10.789812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.794188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.477 [2024-07-26 10:23:10.794476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.477 [2024-07-26 10:23:10.794502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.798931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.477 [2024-07-26 10:23:10.799217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.477 [2024-07-26 10:23:10.799242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.803632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.477 [2024-07-26 10:23:10.803958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.477 [2024-07-26 10:23:10.804001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.808489] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.477 [2024-07-26 10:23:10.808850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.477 [2024-07-26 10:23:10.808882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.477 [2024-07-26 10:23:10.813298] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.477 [2024-07-26 10:23:10.813594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.813628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.478 [2024-07-26 10:23:10.817994] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.818273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.818298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.478 [2024-07-26 10:23:10.822684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.822963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.822988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.478 [2024-07-26 10:23:10.827239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.827520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.827545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.478 [2024-07-26 10:23:10.831972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.832270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.832296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.478 [2024-07-26 10:23:10.836598] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.836940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.836987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.478 [2024-07-26 10:23:10.841374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.841701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.841728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
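00:17:57.478 [editor's note] The repeated tcp.c:2034:data_crc32_calc_done "Data digest error" records above come from the receive-side data digest check of the NVMe/TCP transport, which covers PDU data with a CRC32C (Castagnoli) checksum; each mismatch is then surfaced as the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion printed on the next record. The following is a minimal illustrative sketch of that kind of check, not SPDK's actual code; the helper names (crc32c, check_data_digest) and the sample payload are hypothetical.

/*
 * Illustrative sketch only: recompute a CRC32C data digest over a received
 * payload and compare it with the digest carried on the wire, roughly what
 * a "data digest error" log line reports when the two disagree.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (reflected, polynomial 0x82F63B78). */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical receive-side check: recompute and compare the data digest. */
static int check_data_digest(const uint8_t *data, size_t len, uint32_t recv_digest)
{
    uint32_t calc = crc32c(data, len);
    if (calc != recv_digest) {
        fprintf(stderr, "Data digest error: received 0x%08x, calculated 0x%08x\n",
                recv_digest, calc);
        return -1; /* the command would then complete with a transient transport error */
    }
    return 0;
}

int main(void)
{
    uint8_t payload[32];
    memset(payload, 0xAB, sizeof(payload));  /* 32-block write payloads appear throughout the log */
    uint32_t good_digest = crc32c(payload, sizeof(payload));

    payload[7] ^= 0x01;  /* simulate corruption on the wire */
    return check_data_digest(payload, sizeof(payload), good_digest) ? 1 : 0;
}

00:17:57.478 [editor's note] End of sketch; the console output resumes below.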
00:17:57.478 [2024-07-26 10:23:10.846146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.846425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.846451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.478 [2024-07-26 10:23:10.850861] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.851163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.851188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.478 [2024-07-26 10:23:10.855506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.855869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.855905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.478 [2024-07-26 10:23:10.860295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.860575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.860608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.478 [2024-07-26 10:23:10.865005] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.865305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.865330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.478 [2024-07-26 10:23:10.869664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.869951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.869992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.478 [2024-07-26 10:23:10.874302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.874581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.874617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.478 [2024-07-26 10:23:10.878968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.879248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.879274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.478 [2024-07-26 10:23:10.883626] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.883930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.883956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.478 [2024-07-26 10:23:10.888316] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.888598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.888633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.478 [2024-07-26 10:23:10.892979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.893261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.893286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.478 [2024-07-26 10:23:10.897679] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.897965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.898005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.478 [2024-07-26 10:23:10.902326] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.902625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.902650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.478 [2024-07-26 10:23:10.907051] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.907339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.907365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.478 [2024-07-26 10:23:10.911738] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.912024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.912049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.478 [2024-07-26 10:23:10.916542] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.916895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.916942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.478 [2024-07-26 10:23:10.921386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.921722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.921753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.478 [2024-07-26 10:23:10.926164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.478 [2024-07-26 10:23:10.926449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.478 [2024-07-26 10:23:10.926475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.738 [2024-07-26 10:23:10.930893] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.738 [2024-07-26 10:23:10.931251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.738 [2024-07-26 10:23:10.931295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.738 [2024-07-26 10:23:10.936179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.738 [2024-07-26 10:23:10.936476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.738 [2024-07-26 10:23:10.936503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.738 [2024-07-26 10:23:10.941296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.738 [2024-07-26 10:23:10.941591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.738 [2024-07-26 10:23:10.941627] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.738 [2024-07-26 10:23:10.946327] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.738 [2024-07-26 10:23:10.946630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.738 [2024-07-26 10:23:10.946666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.738 [2024-07-26 10:23:10.951320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.738 [2024-07-26 10:23:10.951608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.738 [2024-07-26 10:23:10.951645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:10.956141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:10.956431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:10.956456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:10.960788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:10.961054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:10.961079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:10.965435] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:10.965745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:10.965771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:10.970189] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:10.970471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:10.970498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:10.974892] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:10.975188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 
[2024-07-26 10:23:10.975213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:10.979629] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:10.979940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:10.979966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:10.984362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:10.984691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:10.984717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:10.989196] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:10.989478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:10.989504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:10.993917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:10.994184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:10.994209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:10.998552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:10.998927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:10.998966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:11.003388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:11.003717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:11.003739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:11.008177] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:11.008457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:11.008482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:11.013034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:11.013311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:11.013337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:11.017750] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:11.018034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:11.018059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:11.022408] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:11.022736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:11.022763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:11.027089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:11.027368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:11.027394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:11.031857] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:11.032168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:11.032194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:11.036667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:11.036955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:11.036981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:11.041351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:11.041661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:11.041686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:11.046013] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:11.046276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:11.046302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:11.050708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:11.051008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:11.051034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:11.055298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:11.055576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:11.055611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:11.060107] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:11.060399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:11.060425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:11.064878] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:11.065158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:11.065184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:11.069564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:11.069855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:11.069880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.739 [2024-07-26 10:23:11.074305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.739 [2024-07-26 10:23:11.074615] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.739 [2024-07-26 10:23:11.074639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.740 [2024-07-26 10:23:11.079012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.740 [2024-07-26 10:23:11.079276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.740 [2024-07-26 10:23:11.079301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.740 [2024-07-26 10:23:11.083714] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.740 [2024-07-26 10:23:11.084032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.740 [2024-07-26 10:23:11.084058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.740 [2024-07-26 10:23:11.088467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.740 [2024-07-26 10:23:11.088807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.740 [2024-07-26 10:23:11.088838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.740 [2024-07-26 10:23:11.093330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.740 [2024-07-26 10:23:11.093628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.740 [2024-07-26 10:23:11.093653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.740 [2024-07-26 10:23:11.098220] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.740 [2024-07-26 10:23:11.098498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.740 [2024-07-26 10:23:11.098523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.740 [2024-07-26 10:23:11.102939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.740 [2024-07-26 10:23:11.103219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.740 [2024-07-26 10:23:11.103244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.740 [2024-07-26 10:23:11.107631] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.740 [2024-07-26 10:23:11.107943] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.740 [2024-07-26 10:23:11.107969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.740 [2024-07-26 10:23:11.112363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.740 [2024-07-26 10:23:11.112675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.740 [2024-07-26 10:23:11.112697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.740 [2024-07-26 10:23:11.117091] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.740 [2024-07-26 10:23:11.117375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.740 [2024-07-26 10:23:11.117400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.740 [2024-07-26 10:23:11.121796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.740 [2024-07-26 10:23:11.122077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.740 [2024-07-26 10:23:11.122102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.740 [2024-07-26 10:23:11.127022] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.740 [2024-07-26 10:23:11.127328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.740 [2024-07-26 10:23:11.127353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.740 [2024-07-26 10:23:11.132155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.740 [2024-07-26 10:23:11.132458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.740 [2024-07-26 10:23:11.132483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.740 [2024-07-26 10:23:11.137277] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.740 [2024-07-26 10:23:11.137589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.740 [2024-07-26 10:23:11.137625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.740 [2024-07-26 10:23:11.142561] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.740 
[2024-07-26 10:23:11.142930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.740 [2024-07-26 10:23:11.142968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.740 [2024-07-26 10:23:11.147748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.740 [2024-07-26 10:23:11.148051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.740 [2024-07-26 10:23:11.148079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.740 [2024-07-26 10:23:11.152782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.740 [2024-07-26 10:23:11.153097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.740 [2024-07-26 10:23:11.153124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.740 [2024-07-26 10:23:11.157910] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.740 [2024-07-26 10:23:11.158258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.740 [2024-07-26 10:23:11.158283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.740 [2024-07-26 10:23:11.163107] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.740 [2024-07-26 10:23:11.163465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.740 [2024-07-26 10:23:11.163491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.740 [2024-07-26 10:23:11.168262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.740 [2024-07-26 10:23:11.168543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.740 [2024-07-26 10:23:11.168568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.740 [2024-07-26 10:23:11.173343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.740 [2024-07-26 10:23:11.173641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.740 [2024-07-26 10:23:11.173692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.740 [2024-07-26 10:23:11.178413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) 
with pdu=0x2000190fef90 00:17:57.740 [2024-07-26 10:23:11.178764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.740 [2024-07-26 10:23:11.178813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.740 [2024-07-26 10:23:11.183572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.740 [2024-07-26 10:23:11.183924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.740 [2024-07-26 10:23:11.183951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.740 [2024-07-26 10:23:11.188590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:57.740 [2024-07-26 10:23:11.188967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.740 [2024-07-26 10:23:11.189000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.000 [2024-07-26 10:23:11.193815] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.000 [2024-07-26 10:23:11.194169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.000 [2024-07-26 10:23:11.194195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.000 [2024-07-26 10:23:11.198954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.000 [2024-07-26 10:23:11.199329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.000 [2024-07-26 10:23:11.199355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.000 [2024-07-26 10:23:11.204241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.000 [2024-07-26 10:23:11.204521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.000 [2024-07-26 10:23:11.204546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.000 [2024-07-26 10:23:11.209372] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.000 [2024-07-26 10:23:11.209679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.000 [2024-07-26 10:23:11.209715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.000 [2024-07-26 10:23:11.214558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.000 [2024-07-26 10:23:11.214946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.000 [2024-07-26 10:23:11.214984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.000 [2024-07-26 10:23:11.219894] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.000 [2024-07-26 10:23:11.220216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.000 [2024-07-26 10:23:11.220257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.000 [2024-07-26 10:23:11.225106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.000 [2024-07-26 10:23:11.225455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.000 [2024-07-26 10:23:11.225482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.000 [2024-07-26 10:23:11.230470] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.000 [2024-07-26 10:23:11.230810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.000 [2024-07-26 10:23:11.230843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.000 [2024-07-26 10:23:11.235511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.000 [2024-07-26 10:23:11.235893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.000 [2024-07-26 10:23:11.235921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.000 [2024-07-26 10:23:11.240815] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.000 [2024-07-26 10:23:11.241141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.000 [2024-07-26 10:23:11.241165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.000 [2024-07-26 10:23:11.245792] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.000 [2024-07-26 10:23:11.246065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.000 [2024-07-26 10:23:11.246090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.000 [2024-07-26 10:23:11.250482] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.000 [2024-07-26 10:23:11.250840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.000 [2024-07-26 10:23:11.250872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.000 [2024-07-26 10:23:11.255232] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.000 [2024-07-26 10:23:11.255512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.000 [2024-07-26 10:23:11.255541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.000 [2024-07-26 10:23:11.260154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.000 [2024-07-26 10:23:11.260433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.260459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.265059] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.265341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.265367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.269967] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.270236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.270263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.274964] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.275281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.275307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.279863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.280164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.280190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:17:58.001 [2024-07-26 10:23:11.284691] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.284992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.285018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.289339] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.289650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.289675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.294380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.294684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.294722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.299353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.299709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.299737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.304372] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.304710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.304737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.309331] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.309636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.309676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.314538] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.314895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.314923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.319602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.319973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.320001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.324646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.324982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.325020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.329695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.330036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.330064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.334925] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.335273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.335298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.340060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.340361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.340387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.345031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.345322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.345348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.349879] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.350150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.350176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.354726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.355038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.355069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.359773] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.360087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.360114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.364642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.364986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.365024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.369501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.369834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.369865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.374529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.374837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.374865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.379291] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.379580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.379617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.384148] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.384435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.001 [2024-07-26 10:23:11.384461] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.001 [2024-07-26 10:23:11.388938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.001 [2024-07-26 10:23:11.389241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.002 [2024-07-26 10:23:11.389267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.002 [2024-07-26 10:23:11.393810] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.002 [2024-07-26 10:23:11.394093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.002 [2024-07-26 10:23:11.394119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.002 [2024-07-26 10:23:11.398645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.002 [2024-07-26 10:23:11.398952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.002 [2024-07-26 10:23:11.398978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.002 [2024-07-26 10:23:11.403515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.002 [2024-07-26 10:23:11.403880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.002 [2024-07-26 10:23:11.403913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.002 [2024-07-26 10:23:11.408397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.002 [2024-07-26 10:23:11.408750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.002 [2024-07-26 10:23:11.408782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.002 [2024-07-26 10:23:11.413400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.002 [2024-07-26 10:23:11.413747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.002 [2024-07-26 10:23:11.413779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.002 [2024-07-26 10:23:11.418351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.002 [2024-07-26 10:23:11.418678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.002 
[2024-07-26 10:23:11.418705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.002 [2024-07-26 10:23:11.423258] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.002 [2024-07-26 10:23:11.423556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.002 [2024-07-26 10:23:11.423591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.002 [2024-07-26 10:23:11.428091] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.002 [2024-07-26 10:23:11.428381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.002 [2024-07-26 10:23:11.428408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.002 [2024-07-26 10:23:11.432849] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.002 [2024-07-26 10:23:11.433200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.002 [2024-07-26 10:23:11.433241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.002 [2024-07-26 10:23:11.437758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.002 [2024-07-26 10:23:11.438040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.002 [2024-07-26 10:23:11.438065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.002 [2024-07-26 10:23:11.442595] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.002 [2024-07-26 10:23:11.442895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.002 [2024-07-26 10:23:11.442921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.002 [2024-07-26 10:23:11.447372] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.002 [2024-07-26 10:23:11.447743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.002 [2024-07-26 10:23:11.447779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.002 [2024-07-26 10:23:11.452518] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.002 [2024-07-26 10:23:11.452910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:58.002 [2024-07-26 10:23:11.452942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.269 [2024-07-26 10:23:11.457861] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.269 [2024-07-26 10:23:11.458177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.269 [2024-07-26 10:23:11.458218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.269 [2024-07-26 10:23:11.462982] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.269 [2024-07-26 10:23:11.463297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.269 [2024-07-26 10:23:11.463323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.269 [2024-07-26 10:23:11.468141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.269 [2024-07-26 10:23:11.468444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.269 [2024-07-26 10:23:11.468470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.269 [2024-07-26 10:23:11.473247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.269 [2024-07-26 10:23:11.473566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.269 [2024-07-26 10:23:11.473626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.269 [2024-07-26 10:23:11.478132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.269 [2024-07-26 10:23:11.478435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.269 [2024-07-26 10:23:11.478505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.269 [2024-07-26 10:23:11.482972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.269 [2024-07-26 10:23:11.483260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.269 [2024-07-26 10:23:11.483286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.269 [2024-07-26 10:23:11.487777] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.269 [2024-07-26 10:23:11.488080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.269 [2024-07-26 10:23:11.488107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.269 [2024-07-26 10:23:11.492633] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.269 [2024-07-26 10:23:11.492939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.269 [2024-07-26 10:23:11.492976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.269 [2024-07-26 10:23:11.497362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.269 [2024-07-26 10:23:11.497689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.269 [2024-07-26 10:23:11.497715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.269 [2024-07-26 10:23:11.502159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.269 [2024-07-26 10:23:11.502447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.269 [2024-07-26 10:23:11.502473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.269 [2024-07-26 10:23:11.506940] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.269 [2024-07-26 10:23:11.507227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.269 [2024-07-26 10:23:11.507257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.269 [2024-07-26 10:23:11.511633] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.269 [2024-07-26 10:23:11.511946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.269 [2024-07-26 10:23:11.511973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.269 [2024-07-26 10:23:11.516441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.269 [2024-07-26 10:23:11.516808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.269 [2024-07-26 10:23:11.516839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.269 [2024-07-26 10:23:11.521290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.269 [2024-07-26 10:23:11.521576] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.269 [2024-07-26 10:23:11.521627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.269 [2024-07-26 10:23:11.526058] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.269 [2024-07-26 10:23:11.526344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.269 [2024-07-26 10:23:11.526370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.530795] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.531080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.531106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.535706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.536031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.536058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.540525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.540891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.540922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.545334] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.545637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.545662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.550124] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.550414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.550440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.554889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.555191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.555217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.559622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.559951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.559978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.564445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.564767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.564793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.569231] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.569518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.569545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.574200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.574490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.574516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.578968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.579281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.579308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.583709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.584036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.584062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.588488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 
[2024-07-26 10:23:11.588808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.588835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.593365] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.593673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.593700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.598312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.598649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.598674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.603230] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.603526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.603552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.608180] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.608494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.608536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.613274] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.613573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.613610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.618249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.618544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.618610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.623161] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) 
with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.623471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.623498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.628133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.628420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.628446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.632936] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.633221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.633247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.637736] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.638025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.638051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.642495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.642856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.642888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.647203] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.647490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.270 [2024-07-26 10:23:11.647516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.270 [2024-07-26 10:23:11.652023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.270 [2024-07-26 10:23:11.652302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.271 [2024-07-26 10:23:11.652328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.271 [2024-07-26 10:23:11.656999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.271 [2024-07-26 10:23:11.657308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.271 [2024-07-26 10:23:11.657334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.271 [2024-07-26 10:23:11.661940] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.271 [2024-07-26 10:23:11.662246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.271 [2024-07-26 10:23:11.662273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.271 [2024-07-26 10:23:11.666830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.271 [2024-07-26 10:23:11.667139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.271 [2024-07-26 10:23:11.667165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.271 [2024-07-26 10:23:11.671765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.271 [2024-07-26 10:23:11.672083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.271 [2024-07-26 10:23:11.672109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.271 [2024-07-26 10:23:11.676708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.271 [2024-07-26 10:23:11.676995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.271 [2024-07-26 10:23:11.677021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.271 [2024-07-26 10:23:11.681457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.271 [2024-07-26 10:23:11.681811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.271 [2024-07-26 10:23:11.681843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.271 [2024-07-26 10:23:11.686439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.271 [2024-07-26 10:23:11.686802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.271 [2024-07-26 10:23:11.686834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.271 [2024-07-26 10:23:11.691302] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.271 [2024-07-26 10:23:11.691591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.271 [2024-07-26 10:23:11.691628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.271 [2024-07-26 10:23:11.696422] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.271 [2024-07-26 10:23:11.696776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.271 [2024-07-26 10:23:11.696804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.271 [2024-07-26 10:23:11.701457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.271 [2024-07-26 10:23:11.701809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.271 [2024-07-26 10:23:11.701841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.271 [2024-07-26 10:23:11.706560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.271 [2024-07-26 10:23:11.706900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.271 [2024-07-26 10:23:11.706938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.271 [2024-07-26 10:23:11.711649] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.271 [2024-07-26 10:23:11.711960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.271 [2024-07-26 10:23:11.712003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.271 [2024-07-26 10:23:11.716729] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.271 [2024-07-26 10:23:11.717037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.271 [2024-07-26 10:23:11.717075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.560 [2024-07-26 10:23:11.721759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.560 [2024-07-26 10:23:11.722083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.560 [2024-07-26 10:23:11.722110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
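
Aside on the repeated errors above: the `data_crc32_calc_done: *ERROR*: Data digest error` entries come from the NVMe/TCP data digest (DDGST) check, in which the receiver recomputes a CRC-32C over each data PDU payload and compares it with the digest carried in the PDU; each mismatch in this run is then reported as a WRITE completion with TRANSIENT TRANSPORT ERROR (00/22). As a rough, self-contained illustration only (this is not SPDK's implementation; the payload and printed check value are made up for the example), a CRC-32C can be computed bitwise like this:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative bitwise CRC-32C (Castagnoli), the digest family used for the
 * NVMe/TCP header and data digests. Reflected polynomial 0x82F63B78, initial
 * value and final XOR 0xFFFFFFFF. Production code would use a table-driven
 * or hardware-accelerated (SSE4.2) implementation instead. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int k = 0; k < 8; k++) {
			crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
	/* Hypothetical payload; the well-known CRC-32C check value for
	 * "123456789" is 0xE3069283, which this program should print. */
	const uint8_t payload[] = "123456789";

	printf("crc32c = 0x%08x\n", crc32c(payload, sizeof(payload) - 1));
	return 0;
}

A receiver that computes a different CRC-32C than the one in the PDU would flag exactly the kind of digest error repeated throughout this log.
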
00:17:58.560 [2024-07-26 10:23:11.726834] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.560 [2024-07-26 10:23:11.727144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.560 [2024-07-26 10:23:11.727171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.560 [2024-07-26 10:23:11.731918] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.560 [2024-07-26 10:23:11.732218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.560 [2024-07-26 10:23:11.732244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.560 [2024-07-26 10:23:11.737128] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.560 [2024-07-26 10:23:11.737446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.560 [2024-07-26 10:23:11.737472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.560 [2024-07-26 10:23:11.742122] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.560 [2024-07-26 10:23:11.742445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.560 [2024-07-26 10:23:11.742466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.560 [2024-07-26 10:23:11.747487] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.560 [2024-07-26 10:23:11.747874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.560 [2024-07-26 10:23:11.747901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.560 [2024-07-26 10:23:11.752525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.560 [2024-07-26 10:23:11.752907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.560 [2024-07-26 10:23:11.752954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.560 [2024-07-26 10:23:11.757531] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.560 [2024-07-26 10:23:11.757914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.560 [2024-07-26 10:23:11.757945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.560 [2024-07-26 10:23:11.762486] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.560 [2024-07-26 10:23:11.762825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.560 [2024-07-26 10:23:11.762857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.560 [2024-07-26 10:23:11.767404] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.560 [2024-07-26 10:23:11.767754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.560 [2024-07-26 10:23:11.767817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.560 [2024-07-26 10:23:11.772344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.560 [2024-07-26 10:23:11.772671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.560 [2024-07-26 10:23:11.772707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.560 [2024-07-26 10:23:11.777169] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.560 [2024-07-26 10:23:11.777478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.560 [2024-07-26 10:23:11.777505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.560 [2024-07-26 10:23:11.781996] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.560 [2024-07-26 10:23:11.782291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.560 [2024-07-26 10:23:11.782316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.560 [2024-07-26 10:23:11.786802] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.560 [2024-07-26 10:23:11.787136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.560 [2024-07-26 10:23:11.787162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.560 [2024-07-26 10:23:11.791708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.560 [2024-07-26 10:23:11.792023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.560 [2024-07-26 10:23:11.792048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.560 [2024-07-26 10:23:11.796719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.560 [2024-07-26 10:23:11.797023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.560 [2024-07-26 10:23:11.797049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.560 [2024-07-26 10:23:11.801660] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.560 [2024-07-26 10:23:11.801949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.560 [2024-07-26 10:23:11.801974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.560 [2024-07-26 10:23:11.806631] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.560 [2024-07-26 10:23:11.806960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.560 [2024-07-26 10:23:11.806997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.560 [2024-07-26 10:23:11.811767] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.560 [2024-07-26 10:23:11.812092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.560 [2024-07-26 10:23:11.812118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.560 [2024-07-26 10:23:11.816748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.560 [2024-07-26 10:23:11.817046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.560 [2024-07-26 10:23:11.817073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.560 [2024-07-26 10:23:11.821769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.822075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.822102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.826699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.826994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.827020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.831732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.832070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.832096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.836825] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.837139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.837166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.841946] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.842271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.842298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.847128] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.847448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.847476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.852325] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.852625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.852668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.857451] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.857837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.857864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.862638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.862932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 
[2024-07-26 10:23:11.862958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.867676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.867997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.868022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.872791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.873097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.873122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.877899] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.878222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.878249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.883026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.883336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.883375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.888101] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.888446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.888474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.893376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.893703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.893729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.898166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.898469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.898495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.902982] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.903290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.903317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.907899] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.908236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.908262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.912913] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.913226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.913252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.918003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.918344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.918372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.923013] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.923327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.923355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.928094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.928414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.928440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.933111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.933415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.933442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.938116] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.938427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.938453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.943089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.943416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.561 [2024-07-26 10:23:11.943443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.561 [2024-07-26 10:23:11.948177] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.561 [2024-07-26 10:23:11.948486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.562 [2024-07-26 10:23:11.948527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.562 [2024-07-26 10:23:11.953248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.562 [2024-07-26 10:23:11.953529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.562 [2024-07-26 10:23:11.953569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.562 [2024-07-26 10:23:11.958156] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.562 [2024-07-26 10:23:11.958444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.562 [2024-07-26 10:23:11.958470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.562 [2024-07-26 10:23:11.962967] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.562 [2024-07-26 10:23:11.963253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.562 [2024-07-26 10:23:11.963279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.562 [2024-07-26 10:23:11.967803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.562 [2024-07-26 10:23:11.968115] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.562 [2024-07-26 10:23:11.968140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.562 [2024-07-26 10:23:11.973135] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.562 [2024-07-26 10:23:11.973500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.562 [2024-07-26 10:23:11.973543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.562 [2024-07-26 10:23:11.978490] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.562 [2024-07-26 10:23:11.978837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.562 [2024-07-26 10:23:11.978865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.562 [2024-07-26 10:23:11.983526] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.562 [2024-07-26 10:23:11.983931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.562 [2024-07-26 10:23:11.983963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.562 [2024-07-26 10:23:11.988543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.562 [2024-07-26 10:23:11.988904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.562 [2024-07-26 10:23:11.988931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.562 [2024-07-26 10:23:11.993179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8e9e30) with pdu=0x2000190fef90 00:17:58.562 [2024-07-26 10:23:11.993346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.562 [2024-07-26 10:23:11.993367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.562 00:17:58.562 Latency(us) 00:17:58.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.562 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:58.562 nvme0n1 : 2.00 6305.17 788.15 0.00 0.00 2532.12 1966.08 5838.66 00:17:58.562 =================================================================================================================== 00:17:58.562 Total : 6305.17 788.15 0.00 0.00 2532.12 1966.08 5838.66 00:17:58.562 0 00:17:58.821 10:23:12 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:58.821 10:23:12 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:58.821 10:23:12 -- host/digest.sh@28 -- 
# jq -r '.bdevs[0] 00:17:58.821 | .driver_specific 00:17:58.821 | .nvme_error 00:17:58.821 | .status_code 00:17:58.821 | .command_transient_transport_error' 00:17:58.821 10:23:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:58.821 10:23:12 -- host/digest.sh@71 -- # (( 407 > 0 )) 00:17:58.821 10:23:12 -- host/digest.sh@73 -- # killprocess 83667 00:17:58.821 10:23:12 -- common/autotest_common.sh@926 -- # '[' -z 83667 ']' 00:17:58.821 10:23:12 -- common/autotest_common.sh@930 -- # kill -0 83667 00:17:58.821 10:23:12 -- common/autotest_common.sh@931 -- # uname 00:17:58.821 10:23:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:58.821 10:23:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83667 00:17:59.079 killing process with pid 83667 00:17:59.079 Received shutdown signal, test time was about 2.000000 seconds 00:17:59.079 00:17:59.079 Latency(us) 00:17:59.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.079 =================================================================================================================== 00:17:59.079 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:59.079 10:23:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:59.079 10:23:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:59.080 10:23:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83667' 00:17:59.080 10:23:12 -- common/autotest_common.sh@945 -- # kill 83667 00:17:59.080 10:23:12 -- common/autotest_common.sh@950 -- # wait 83667 00:17:59.080 10:23:12 -- host/digest.sh@115 -- # killprocess 83459 00:17:59.080 10:23:12 -- common/autotest_common.sh@926 -- # '[' -z 83459 ']' 00:17:59.080 10:23:12 -- common/autotest_common.sh@930 -- # kill -0 83459 00:17:59.080 10:23:12 -- common/autotest_common.sh@931 -- # uname 00:17:59.080 10:23:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:59.080 10:23:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83459 00:17:59.080 killing process with pid 83459 00:17:59.080 10:23:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:59.080 10:23:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:59.080 10:23:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83459' 00:17:59.080 10:23:12 -- common/autotest_common.sh@945 -- # kill 83459 00:17:59.080 10:23:12 -- common/autotest_common.sh@950 -- # wait 83459 00:17:59.338 00:17:59.338 real 0m18.109s 00:17:59.338 user 0m34.641s 00:17:59.338 sys 0m4.936s 00:17:59.338 10:23:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:59.338 ************************************ 00:17:59.338 END TEST nvmf_digest_error 00:17:59.338 ************************************ 00:17:59.338 10:23:12 -- common/autotest_common.sh@10 -- # set +x 00:17:59.338 10:23:12 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:17:59.338 10:23:12 -- host/digest.sh@139 -- # nvmftestfini 00:17:59.338 10:23:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:59.338 10:23:12 -- nvmf/common.sh@116 -- # sync 00:17:59.597 10:23:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:59.597 10:23:12 -- nvmf/common.sh@119 -- # set +e 00:17:59.597 10:23:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:59.597 10:23:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:59.597 rmmod nvme_tcp 00:17:59.597 rmmod nvme_fabrics 00:17:59.597 rmmod nvme_keyring 
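The get_transient_errcount step traced above amounts to a single RPC call plus a jq filter over the bdev's driver-specific NVMe error counters. A minimal stand-alone sketch of the same check, assuming bdevperf is still listening on /var/tmp/bperf.sock and exposes the bdev as nvme0n1 (both taken from the trace; the non-zero threshold mirrors the (( 407 > 0 )) test above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Read how many completions bdevperf saw with the TRANSIENT TRANSPORT ERROR status code.
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# Forcing bad data digests must have produced at least one such error for the test to pass.
(( errcount > 0 )) && echo "transient transport errors recorded: $errcount"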
00:17:59.597 10:23:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:59.597 10:23:12 -- nvmf/common.sh@123 -- # set -e 00:17:59.597 10:23:12 -- nvmf/common.sh@124 -- # return 0 00:17:59.597 10:23:12 -- nvmf/common.sh@477 -- # '[' -n 83459 ']' 00:17:59.597 10:23:12 -- nvmf/common.sh@478 -- # killprocess 83459 00:17:59.597 10:23:12 -- common/autotest_common.sh@926 -- # '[' -z 83459 ']' 00:17:59.597 10:23:12 -- common/autotest_common.sh@930 -- # kill -0 83459 00:17:59.597 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (83459) - No such process 00:17:59.597 Process with pid 83459 is not found 00:17:59.597 10:23:12 -- common/autotest_common.sh@953 -- # echo 'Process with pid 83459 is not found' 00:17:59.597 10:23:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:59.597 10:23:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:59.597 10:23:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:59.597 10:23:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:59.597 10:23:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:59.597 10:23:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.597 10:23:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.597 10:23:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.597 10:23:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:59.597 00:17:59.597 real 0m37.129s 00:17:59.597 user 1m9.947s 00:17:59.597 sys 0m10.039s 00:17:59.597 10:23:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:59.597 ************************************ 00:17:59.597 END TEST nvmf_digest 00:17:59.597 10:23:12 -- common/autotest_common.sh@10 -- # set +x 00:17:59.597 ************************************ 00:17:59.597 10:23:12 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:17:59.597 10:23:12 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:17:59.597 10:23:12 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:59.597 10:23:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:59.597 10:23:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:59.597 10:23:12 -- common/autotest_common.sh@10 -- # set +x 00:17:59.597 ************************************ 00:17:59.597 START TEST nvmf_multipath 00:17:59.597 ************************************ 00:17:59.597 10:23:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:59.597 * Looking for test storage... 
00:17:59.597 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:59.597 10:23:13 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:59.597 10:23:13 -- nvmf/common.sh@7 -- # uname -s 00:17:59.597 10:23:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:59.597 10:23:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:59.597 10:23:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:59.597 10:23:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:59.597 10:23:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:59.597 10:23:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:59.597 10:23:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:59.597 10:23:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:59.597 10:23:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:59.597 10:23:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:59.597 10:23:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:17:59.597 10:23:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:17:59.597 10:23:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:59.597 10:23:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:59.597 10:23:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:59.597 10:23:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:59.597 10:23:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:59.597 10:23:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:59.597 10:23:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:59.597 10:23:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.597 10:23:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.597 10:23:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.597 10:23:13 -- paths/export.sh@5 
-- # export PATH 00:17:59.597 10:23:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.597 10:23:13 -- nvmf/common.sh@46 -- # : 0 00:17:59.597 10:23:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:59.597 10:23:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:59.597 10:23:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:59.597 10:23:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:59.597 10:23:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:59.597 10:23:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:59.597 10:23:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:59.597 10:23:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:59.597 10:23:13 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:59.597 10:23:13 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:59.597 10:23:13 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:59.597 10:23:13 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:59.597 10:23:13 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:59.597 10:23:13 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:59.597 10:23:13 -- host/multipath.sh@30 -- # nvmftestinit 00:17:59.597 10:23:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:59.597 10:23:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:59.597 10:23:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:59.597 10:23:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:59.597 10:23:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:59.597 10:23:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.597 10:23:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.597 10:23:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.856 10:23:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:59.856 10:23:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:59.856 10:23:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:59.856 10:23:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:59.856 10:23:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:59.856 10:23:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:59.856 10:23:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.856 10:23:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:59.856 10:23:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:59.856 10:23:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:59.856 10:23:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:59.856 10:23:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:59.856 10:23:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:59.856 10:23:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.856 10:23:13 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:59.856 10:23:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:59.856 10:23:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:59.856 10:23:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:59.856 10:23:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:59.856 10:23:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:59.856 Cannot find device "nvmf_tgt_br" 00:17:59.856 10:23:13 -- nvmf/common.sh@154 -- # true 00:17:59.856 10:23:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:59.856 Cannot find device "nvmf_tgt_br2" 00:17:59.856 10:23:13 -- nvmf/common.sh@155 -- # true 00:17:59.856 10:23:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:59.856 10:23:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:59.856 Cannot find device "nvmf_tgt_br" 00:17:59.856 10:23:13 -- nvmf/common.sh@157 -- # true 00:17:59.856 10:23:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:59.856 Cannot find device "nvmf_tgt_br2" 00:17:59.856 10:23:13 -- nvmf/common.sh@158 -- # true 00:17:59.856 10:23:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:59.856 10:23:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:59.856 10:23:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:59.856 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:59.856 10:23:13 -- nvmf/common.sh@161 -- # true 00:17:59.856 10:23:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:59.856 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:59.856 10:23:13 -- nvmf/common.sh@162 -- # true 00:17:59.856 10:23:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:59.856 10:23:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:59.856 10:23:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:59.856 10:23:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:59.856 10:23:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:59.856 10:23:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:59.856 10:23:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:59.857 10:23:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:59.857 10:23:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:59.857 10:23:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:59.857 10:23:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:59.857 10:23:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:59.857 10:23:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:59.857 10:23:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:59.857 10:23:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:59.857 10:23:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:59.857 10:23:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:59.857 10:23:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:59.857 10:23:13 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:00.115 10:23:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:00.115 10:23:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:00.115 10:23:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:00.115 10:23:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:00.115 10:23:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:00.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:00.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:18:00.115 00:18:00.115 --- 10.0.0.2 ping statistics --- 00:18:00.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.115 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:18:00.115 10:23:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:00.115 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:00.115 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:18:00.115 00:18:00.115 --- 10.0.0.3 ping statistics --- 00:18:00.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.115 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:18:00.115 10:23:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:00.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:00.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:00.115 00:18:00.115 --- 10.0.0.1 ping statistics --- 00:18:00.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.115 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:00.115 10:23:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.115 10:23:13 -- nvmf/common.sh@421 -- # return 0 00:18:00.115 10:23:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:00.115 10:23:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.115 10:23:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:00.115 10:23:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:00.115 10:23:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.115 10:23:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:00.115 10:23:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:00.115 10:23:13 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:00.115 10:23:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:00.115 10:23:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:00.115 10:23:13 -- common/autotest_common.sh@10 -- # set +x 00:18:00.115 10:23:13 -- nvmf/common.sh@469 -- # nvmfpid=83934 00:18:00.115 10:23:13 -- nvmf/common.sh@470 -- # waitforlisten 83934 00:18:00.115 10:23:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:00.115 10:23:13 -- common/autotest_common.sh@819 -- # '[' -z 83934 ']' 00:18:00.115 10:23:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.115 10:23:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:00.115 10:23:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
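The nvmf_veth_init sequence traced above builds the small virtual topology the TCP tests run on: a veth pair for the initiator on the host side, target-side veth pairs moved into the nvmf_tgt_ns_spdk namespace, everything enslaved to one bridge, and reachability checked with ping before any NVMe traffic is attempted. A condensed, hand-written sketch of the same wiring (interface names and 10.0.0.x addresses copied from the trace; the second target interface and forwarding rule are omitted for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$link" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # the target address must answer before the tests start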
00:18:00.115 10:23:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:00.115 10:23:13 -- common/autotest_common.sh@10 -- # set +x 00:18:00.115 [2024-07-26 10:23:13.437265] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:18:00.115 [2024-07-26 10:23:13.437371] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.373 [2024-07-26 10:23:13.572952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:00.373 [2024-07-26 10:23:13.632543] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:00.373 [2024-07-26 10:23:13.632719] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.373 [2024-07-26 10:23:13.632732] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.373 [2024-07-26 10:23:13.632740] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.373 [2024-07-26 10:23:13.632921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.373 [2024-07-26 10:23:13.632944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.940 10:23:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:00.940 10:23:14 -- common/autotest_common.sh@852 -- # return 0 00:18:00.940 10:23:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:00.941 10:23:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:00.941 10:23:14 -- common/autotest_common.sh@10 -- # set +x 00:18:00.941 10:23:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.941 10:23:14 -- host/multipath.sh@33 -- # nvmfapp_pid=83934 00:18:00.941 10:23:14 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:01.200 [2024-07-26 10:23:14.573909] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.200 10:23:14 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:01.458 Malloc0 00:18:01.458 10:23:14 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:01.716 10:23:15 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:01.975 10:23:15 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:02.233 [2024-07-26 10:23:15.484281] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.233 10:23:15 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:02.492 [2024-07-26 10:23:15.696434] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:02.492 10:23:15 -- host/multipath.sh@44 -- # bdevperf_pid=83984 00:18:02.492 10:23:15 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 
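On the target side, the multipath test needs nothing more than one TCP transport, a 64 MiB malloc namespace, and the same subsystem announced on two listeners (ports 4420 and 4421 on 10.0.0.2) so the initiator ends up with two paths to one namespace. A condensed sketch using the same RPCs as the trace above:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB backing namespace, 512 B blocks
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # path 1
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # path 2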
00:18:02.492 10:23:15 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:02.492 10:23:15 -- host/multipath.sh@47 -- # waitforlisten 83984 /var/tmp/bdevperf.sock 00:18:02.492 10:23:15 -- common/autotest_common.sh@819 -- # '[' -z 83984 ']' 00:18:02.492 10:23:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:02.492 10:23:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:02.492 10:23:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:02.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:02.492 10:23:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:02.492 10:23:15 -- common/autotest_common.sh@10 -- # set +x 00:18:03.429 10:23:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:03.429 10:23:16 -- common/autotest_common.sh@852 -- # return 0 00:18:03.429 10:23:16 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:03.687 10:23:16 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:03.946 Nvme0n1 00:18:03.946 10:23:17 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:04.204 Nvme0n1 00:18:04.204 10:23:17 -- host/multipath.sh@78 -- # sleep 1 00:18:04.205 10:23:17 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:05.582 10:23:18 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:05.582 10:23:18 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:05.582 10:23:18 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:05.841 10:23:19 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:05.841 10:23:19 -- host/multipath.sh@65 -- # dtrace_pid=84035 00:18:05.841 10:23:19 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:05.841 10:23:19 -- host/multipath.sh@66 -- # sleep 6 00:18:12.408 10:23:25 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:12.408 10:23:25 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:12.408 10:23:25 -- host/multipath.sh@67 -- # active_port=4421 00:18:12.408 10:23:25 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:12.408 Attaching 4 probes... 
00:18:12.408 @path[10.0.0.2, 4421]: 15488 00:18:12.408 @path[10.0.0.2, 4421]: 17353 00:18:12.408 @path[10.0.0.2, 4421]: 18826 00:18:12.408 @path[10.0.0.2, 4421]: 17213 00:18:12.408 @path[10.0.0.2, 4421]: 17170 00:18:12.408 10:23:25 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:12.408 10:23:25 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:12.408 10:23:25 -- host/multipath.sh@69 -- # sed -n 1p 00:18:12.408 10:23:25 -- host/multipath.sh@69 -- # port=4421 00:18:12.408 10:23:25 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:12.408 10:23:25 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:12.408 10:23:25 -- host/multipath.sh@72 -- # kill 84035 00:18:12.408 10:23:25 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:12.408 10:23:25 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:12.408 10:23:25 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:12.408 10:23:25 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:12.667 10:23:25 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:12.667 10:23:25 -- host/multipath.sh@65 -- # dtrace_pid=84149 00:18:12.667 10:23:25 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:12.667 10:23:25 -- host/multipath.sh@66 -- # sleep 6 00:18:19.230 10:23:31 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:19.230 10:23:31 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:19.230 10:23:32 -- host/multipath.sh@67 -- # active_port=4420 00:18:19.230 10:23:32 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:19.230 Attaching 4 probes... 
00:18:19.230 @path[10.0.0.2, 4420]: 17873 00:18:19.230 @path[10.0.0.2, 4420]: 17889 00:18:19.230 @path[10.0.0.2, 4420]: 18627 00:18:19.230 @path[10.0.0.2, 4420]: 18221 00:18:19.230 @path[10.0.0.2, 4420]: 18711 00:18:19.230 10:23:32 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:19.230 10:23:32 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:19.230 10:23:32 -- host/multipath.sh@69 -- # sed -n 1p 00:18:19.230 10:23:32 -- host/multipath.sh@69 -- # port=4420 00:18:19.230 10:23:32 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:19.230 10:23:32 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:19.230 10:23:32 -- host/multipath.sh@72 -- # kill 84149 00:18:19.230 10:23:32 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:19.230 10:23:32 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:19.230 10:23:32 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:19.230 10:23:32 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:19.230 10:23:32 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:19.230 10:23:32 -- host/multipath.sh@65 -- # dtrace_pid=84262 00:18:19.230 10:23:32 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:19.230 10:23:32 -- host/multipath.sh@66 -- # sleep 6 00:18:25.791 10:23:38 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:25.791 10:23:38 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:25.791 10:23:38 -- host/multipath.sh@67 -- # active_port=4421 00:18:25.791 10:23:38 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:25.791 Attaching 4 probes... 
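Every verification cycle in this log follows the same two-step recipe: set_ANA_state (host/multipath.sh@58-59) pushes a new ANA state to each listener on the target side, and confirm_io_on_port (@64-73) then checks, through the target's own listener view plus the bpftrace counters, that I/O really moved. A condensed sketch of one such cycle, assuming the target-side rpc.py used throughout this log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # step 1: the first set_ANA_state argument goes to the 4420 listener, the second to 4421
  $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n optimized
  # step 2: ask the target which listener now carries the expected state ...
  $rpc nvmf_subsystem_get_listeners $nqn \
    | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
  # ... and compare that port with the one the bpftrace @path counters report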
00:18:25.791 @path[10.0.0.2, 4421]: 12017 00:18:25.791 @path[10.0.0.2, 4421]: 18298 00:18:25.791 @path[10.0.0.2, 4421]: 20308 00:18:25.791 @path[10.0.0.2, 4421]: 20434 00:18:25.791 @path[10.0.0.2, 4421]: 19702 00:18:25.791 10:23:38 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:25.791 10:23:38 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:25.791 10:23:38 -- host/multipath.sh@69 -- # sed -n 1p 00:18:25.791 10:23:38 -- host/multipath.sh@69 -- # port=4421 00:18:25.791 10:23:38 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:25.791 10:23:38 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:25.791 10:23:38 -- host/multipath.sh@72 -- # kill 84262 00:18:25.791 10:23:38 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:25.791 10:23:38 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:25.791 10:23:38 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:25.791 10:23:39 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:26.049 10:23:39 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:26.049 10:23:39 -- host/multipath.sh@65 -- # dtrace_pid=84380 00:18:26.049 10:23:39 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:26.049 10:23:39 -- host/multipath.sh@66 -- # sleep 6 00:18:32.609 10:23:45 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:32.609 10:23:45 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:32.609 10:23:45 -- host/multipath.sh@67 -- # active_port= 00:18:32.609 10:23:45 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:32.609 Attaching 4 probes... 
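The next cycle (set_ANA_state inaccessible inaccessible, confirm_io_on_port '' '') is the degenerate case: with both listeners inaccessible no I/O completes on either path, so the trace dump below contains no @path lines, the jq filter selects nothing, and both sides of the comparison are empty strings. A small jq illustration of the empty result; the JSON here is a trimmed, hypothetical stand-in for the real nvmf_subsystem_get_listeners output:

  echo '[{"address":{"trsvcid":"4420"},"ana_states":[{"ana_state":"inaccessible"}]},{"address":{"trsvcid":"4421"},"ana_states":[{"ana_state":"inaccessible"}]}]' |
    jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid'
  # nothing matches an empty ana_state, so nothing is printed and active_port stays empty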
00:18:32.609 00:18:32.609 00:18:32.609 00:18:32.609 00:18:32.609 00:18:32.609 10:23:45 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:32.609 10:23:45 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:32.609 10:23:45 -- host/multipath.sh@69 -- # sed -n 1p 00:18:32.609 10:23:45 -- host/multipath.sh@69 -- # port= 00:18:32.609 10:23:45 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:32.609 10:23:45 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:32.609 10:23:45 -- host/multipath.sh@72 -- # kill 84380 00:18:32.609 10:23:45 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:32.609 10:23:45 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:32.609 10:23:45 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:32.609 10:23:45 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:32.868 10:23:46 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:32.868 10:23:46 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:32.868 10:23:46 -- host/multipath.sh@65 -- # dtrace_pid=84492 00:18:32.868 10:23:46 -- host/multipath.sh@66 -- # sleep 6 00:18:39.428 10:23:52 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:39.428 10:23:52 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:39.428 10:23:52 -- host/multipath.sh@67 -- # active_port=4421 00:18:39.428 10:23:52 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:39.428 Attaching 4 probes... 
00:18:39.428 @path[10.0.0.2, 4421]: 19322 00:18:39.428 @path[10.0.0.2, 4421]: 19710 00:18:39.428 @path[10.0.0.2, 4421]: 19783 00:18:39.428 @path[10.0.0.2, 4421]: 20254 00:18:39.428 @path[10.0.0.2, 4421]: 20193 00:18:39.428 10:23:52 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:39.428 10:23:52 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:39.428 10:23:52 -- host/multipath.sh@69 -- # sed -n 1p 00:18:39.428 10:23:52 -- host/multipath.sh@69 -- # port=4421 00:18:39.428 10:23:52 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:39.428 10:23:52 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:39.428 10:23:52 -- host/multipath.sh@72 -- # kill 84492 00:18:39.428 10:23:52 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:39.428 10:23:52 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-07-26 10:23:52.718504] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b8610 is same with the state(5) to be set [the same tcp.c:1574 *ERROR* entry repeats 54 more times, 2024-07-26 10:23:52.718603 through 10:23:52.719073; the identical repeats are omitted here]
00:18:39.429 10:23:52 -- host/multipath.sh@101 -- # sleep 1 00:18:40.362 10:23:53 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:40.362 10:23:53 -- host/multipath.sh@65 -- # dtrace_pid=84617 00:18:40.362 10:23:53 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:40.362 10:23:53 -- host/multipath.sh@66 -- # sleep 6 00:18:46.921 10:23:59 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:46.921 10:23:59 -- host/multipath.sh@67 -- # jq -r
'.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:46.921 10:24:00 -- host/multipath.sh@67 -- # active_port=4420 00:18:46.921 10:24:00 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:46.921 Attaching 4 probes... 00:18:46.921 @path[10.0.0.2, 4420]: 18716 00:18:46.921 @path[10.0.0.2, 4420]: 18947 00:18:46.921 @path[10.0.0.2, 4420]: 18899 00:18:46.921 @path[10.0.0.2, 4420]: 19526 00:18:46.921 @path[10.0.0.2, 4420]: 19399 00:18:46.921 10:24:00 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:46.921 10:24:00 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:46.921 10:24:00 -- host/multipath.sh@69 -- # sed -n 1p 00:18:46.921 10:24:00 -- host/multipath.sh@69 -- # port=4420 00:18:46.921 10:24:00 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:46.921 10:24:00 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:46.921 10:24:00 -- host/multipath.sh@72 -- # kill 84617 00:18:46.921 10:24:00 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:46.921 10:24:00 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:46.921 [2024-07-26 10:24:00.240708] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:46.921 10:24:00 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:47.179 10:24:00 -- host/multipath.sh@111 -- # sleep 6 00:18:53.740 10:24:06 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:53.740 10:24:06 -- host/multipath.sh@65 -- # dtrace_pid=84791 00:18:53.740 10:24:06 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:53.740 10:24:06 -- host/multipath.sh@66 -- # sleep 6 00:19:00.309 10:24:12 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:00.309 10:24:12 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:00.309 10:24:12 -- host/multipath.sh@67 -- # active_port=4421 00:19:00.309 10:24:12 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:00.309 Attaching 4 probes... 
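The final leg above exercises failover driven by listener membership rather than by ANA state: host/multipath.sh@100 removes the 4421 listener outright (the burst of tcp.c recv-state errors earlier is apparently the target tearing down those connections), I/O is confirmed on 4420, and then @107-108 re-add the listener and mark it optimized so traffic returns to 4421, as the probe counts just below confirm. A minimal sketch of that remove/re-add sequence using the same RPCs as the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # drop the currently optimized path; the host falls back to the 4420 path
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  # ... confirm I/O on 4420, then bring the listener back and promote it again ...
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n optimized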
00:19:00.309 @path[10.0.0.2, 4421]: 18257 00:19:00.309 @path[10.0.0.2, 4421]: 18927 00:19:00.309 @path[10.0.0.2, 4421]: 19802 00:19:00.309 @path[10.0.0.2, 4421]: 19302 00:19:00.309 @path[10.0.0.2, 4421]: 20171 00:19:00.309 10:24:12 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:00.309 10:24:12 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:00.309 10:24:12 -- host/multipath.sh@69 -- # sed -n 1p 00:19:00.309 10:24:12 -- host/multipath.sh@69 -- # port=4421 00:19:00.309 10:24:12 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:00.309 10:24:12 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:00.309 10:24:12 -- host/multipath.sh@72 -- # kill 84791 00:19:00.309 10:24:12 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:00.309 10:24:12 -- host/multipath.sh@114 -- # killprocess 83984 00:19:00.309 10:24:12 -- common/autotest_common.sh@926 -- # '[' -z 83984 ']' 00:19:00.309 10:24:12 -- common/autotest_common.sh@930 -- # kill -0 83984 00:19:00.309 10:24:12 -- common/autotest_common.sh@931 -- # uname 00:19:00.309 10:24:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:00.309 10:24:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83984 00:19:00.309 10:24:12 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:00.309 10:24:12 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:00.309 killing process with pid 83984 00:19:00.309 10:24:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83984' 00:19:00.309 10:24:12 -- common/autotest_common.sh@945 -- # kill 83984 00:19:00.309 10:24:12 -- common/autotest_common.sh@950 -- # wait 83984 00:19:00.309 Connection closed with partial response: 00:19:00.309 00:19:00.309 00:19:00.309 10:24:13 -- host/multipath.sh@116 -- # wait 83984 00:19:00.309 10:24:13 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:00.309 [2024-07-26 10:23:15.765605] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:19:00.309 [2024-07-26 10:23:15.765718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83984 ] 00:19:00.309 [2024-07-26 10:23:15.903831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.309 [2024-07-26 10:23:15.986453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.309 Running I/O for 90 seconds... 
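Everything from here on is the captured bdevperf log (try.txt) dumped by host/multipath.sh@118: each I/O appears as one print_command NOTICE followed by one print_completion NOTICE, and during the path transitions the completions carry ASYMMETRIC ACCESS INACCESSIBLE (03/02), the ANA status that tells the host-side multipath code to retry the I/O on the other path. A rough way to summarize such a dump after the fact, assuming the line format shown below:

  # completions that came back with the ANA "inaccessible" status
  grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt | wc -l
  # all completion status strings, grouped and counted
  grep -o '\*NOTICE\*: [A-Z][A-Z ]*([0-9a-f/]*)' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt | sort | uniq -c | sort -rn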
00:19:00.309 [2024-07-26 10:23:25.895154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.309 [2024-07-26 10:23:25.895250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:00.309 [2024-07-26 10:23:25.895320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.309 [2024-07-26 10:23:25.895340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:00.309 [2024-07-26 10:23:25.895363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.309 [2024-07-26 10:23:25.895377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.309 [2024-07-26 10:23:25.895404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.309 [2024-07-26 10:23:25.895417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:00.309 [2024-07-26 10:23:25.895437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.309 [2024-07-26 10:23:25.895451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:00.309 [2024-07-26 10:23:25.895470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.309 [2024-07-26 10:23:25.895484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:00.309 [2024-07-26 10:23:25.895503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.309 [2024-07-26 10:23:25.895517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:00.309 [2024-07-26 10:23:25.895537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.309 [2024-07-26 10:23:25.895551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:00.309 [2024-07-26 10:23:25.895571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.309 [2024-07-26 10:23:25.895602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:00.309 [2024-07-26 10:23:25.895626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.309 [2024-07-26 10:23:25.895640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:00.309 [2024-07-26 10:23:25.895694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.309 [2024-07-26 10:23:25.895721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:00.309 [2024-07-26 10:23:25.895745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.309 [2024-07-26 10:23:25.895759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:00.309 [2024-07-26 10:23:25.895780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.309 [2024-07-26 10:23:25.895794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:00.309 [2024-07-26 10:23:25.895814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.309 [2024-07-26 10:23:25.895828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:00.309 [2024-07-26 10:23:25.895848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.309 [2024-07-26 10:23:25.895862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:00.309 [2024-07-26 10:23:25.895882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.309 [2024-07-26 10:23:25.895896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.895917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.895930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.895951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.895985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.310 [2024-07-26 10:23:25.896019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.310 [2024-07-26 10:23:25.896054] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.896088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.310 [2024-07-26 10:23:25.896123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.896157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.310 [2024-07-26 10:23:25.896236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.310 [2024-07-26 10:23:25.896272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.896306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.310 [2024-07-26 10:23:25.896339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.310 [2024-07-26 10:23:25.896373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.896407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.896440] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.896474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.896508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.896542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.896579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.896626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.896692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.896727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.896762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.310 [2024-07-26 10:23:25.896797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:00.310 [2024-07-26 10:23:25.896831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.310 [2024-07-26 10:23:25.896866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.310 [2024-07-26 10:23:25.896901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.896936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.896975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.310 [2024-07-26 10:23:25.896990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.897010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.897024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.897058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.897073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.897092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.897105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.897124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.897144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.897165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.897179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.897198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 
nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.897211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.897230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.897243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.897263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.897277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.897296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.897309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.897328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.310 [2024-07-26 10:23:25.897342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:00.310 [2024-07-26 10:23:25.897361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.897374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.897394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.897407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.897426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.897439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.897458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.897472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.897491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.311 [2024-07-26 10:23:25.897504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.897524] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.311 [2024-07-26 10:23:25.897543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.897569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.897607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.897627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.311 [2024-07-26 10:23:25.897665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.897698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.311 [2024-07-26 10:23:25.897728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.897748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.311 [2024-07-26 10:23:25.897762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.897785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.311 [2024-07-26 10:23:25.897799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.897820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.311 [2024-07-26 10:23:25.897835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.897881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.311 [2024-07-26 10:23:25.897900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.897922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.897937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.897958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.897973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
00:19:00.311 [2024-07-26 10:23:25.897994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.898023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.898044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.898066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.898101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.898115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.898143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.898157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.898177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.898190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.898216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.898230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.898249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.898262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.898282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.898295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.898315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.898329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.898349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.898363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:31 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.898383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.898396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.898416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.311 [2024-07-26 10:23:25.898429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.898449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.311 [2024-07-26 10:23:25.898462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.898481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.898495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.898515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.898529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.898549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.898569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.898605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.898619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.898663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.898679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.898699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.898714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.898734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.898748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.898768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.898782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.898802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.311 [2024-07-26 10:23:25.898816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.898836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.311 [2024-07-26 10:23:25.898850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:00.311 [2024-07-26 10:23:25.898870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.311 [2024-07-26 10:23:25.898884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.898904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.312 [2024-07-26 10:23:25.898918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.898938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.312 [2024-07-26 10:23:25.898952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.898987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.312 [2024-07-26 10:23:25.899001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.899021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.312 [2024-07-26 10:23:25.899041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.899061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.312 [2024-07-26 10:23:25.899075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.899096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.312 
[2024-07-26 10:23:25.899109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.899133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.312 [2024-07-26 10:23:25.899148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.899168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.312 [2024-07-26 10:23:25.899182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.899202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.312 [2024-07-26 10:23:25.899216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.899236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.312 [2024-07-26 10:23:25.899250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.899270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.312 [2024-07-26 10:23:25.899283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.899303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.312 [2024-07-26 10:23:25.899316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.899336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.312 [2024-07-26 10:23:25.899349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.899369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.312 [2024-07-26 10:23:25.899382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.899401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.312 [2024-07-26 10:23:25.899415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.899434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4584 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.312 [2024-07-26 10:23:25.899448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.899474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.312 [2024-07-26 10:23:25.899488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.899507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.312 [2024-07-26 10:23:25.899521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.899541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.312 [2024-07-26 10:23:25.899555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.901396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.312 [2024-07-26 10:23:25.901424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.901450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.312 [2024-07-26 10:23:25.901466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.901486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.312 [2024-07-26 10:23:25.901499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.901518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.312 [2024-07-26 10:23:25.901531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.901550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.312 [2024-07-26 10:23:25.901564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.901599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.312 [2024-07-26 10:23:25.901627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.901648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:120 nsid:1 lba:5216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.312 [2024-07-26 10:23:25.901662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.901682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.312 [2024-07-26 10:23:25.901696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.901731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.312 [2024-07-26 10:23:25.901745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.901778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.312 [2024-07-26 10:23:25.901794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.901815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.312 [2024-07-26 10:23:25.901828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.901849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.312 [2024-07-26 10:23:25.901863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.901899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.312 [2024-07-26 10:23:25.901917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.901938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.312 [2024-07-26 10:23:25.901967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.901987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.312 [2024-07-26 10:23:25.902000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:25.902020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.312 [2024-07-26 10:23:25.902033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:32.382779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.312 [2024-07-26 10:23:32.382849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:32.382918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.312 [2024-07-26 10:23:32.382936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:32.382958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.312 [2024-07-26 10:23:32.382972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:00.312 [2024-07-26 10:23:32.382992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:107496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.313 [2024-07-26 10:23:32.383006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.383026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:107504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.313 [2024-07-26 10:23:32.383039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.383059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:107512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.313 [2024-07-26 10:23:32.383109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.383130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:106816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.313 [2024-07-26 10:23:32.383144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.383162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:106824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.313 [2024-07-26 10:23:32.383175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.383194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:106840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.313 [2024-07-26 10:23:32.383237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.383256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:106864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.313 [2024-07-26 10:23:32.383269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 
m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.383288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.313 [2024-07-26 10:23:32.383301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.383321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.313 [2024-07-26 10:23:32.383334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.383364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:106912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.313 [2024-07-26 10:23:32.383377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.383397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:106944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.313 [2024-07-26 10:23:32.383409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.383428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.313 [2024-07-26 10:23:32.383441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.383460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:107528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.313 [2024-07-26 10:23:32.383473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.383492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.313 [2024-07-26 10:23:32.383505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.383525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:107544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.313 [2024-07-26 10:23:32.383698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.383725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:107552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.313 [2024-07-26 10:23:32.383741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.383763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:107560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.313 [2024-07-26 10:23:32.383778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.383799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.313 [2024-07-26 10:23:32.383813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.383835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.313 [2024-07-26 10:23:32.383850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.383871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.313 [2024-07-26 10:23:32.383886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.383907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.313 [2024-07-26 10:23:32.383922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.383958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.313 [2024-07-26 10:23:32.383987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.384022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.313 [2024-07-26 10:23:32.384035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.384054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.313 [2024-07-26 10:23:32.384067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.384090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.313 [2024-07-26 10:23:32.384104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.384124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.313 [2024-07-26 10:23:32.384137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.384156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:107640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.313 [2024-07-26 
10:23:32.384179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.384221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:107648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.313 [2024-07-26 10:23:32.384235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.384254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.313 [2024-07-26 10:23:32.384268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:00.313 [2024-07-26 10:23:32.384287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.313 [2024-07-26 10:23:32.384302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.384322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.384335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.384355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:107016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.384368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.384387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:107040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.384401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.384420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:107048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.384433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.384453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.384466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.384485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.384498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.384532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:107112 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.384545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.384568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.314 [2024-07-26 10:23:32.384580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.384599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:107672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.314 [2024-07-26 10:23:32.384611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.384657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.314 [2024-07-26 10:23:32.384683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.384705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:107688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.314 [2024-07-26 10:23:32.384719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.384739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.384752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.384772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:107704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.314 [2024-07-26 10:23:32.384785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.384805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:107712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.384818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.384837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.384851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.384870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:107728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.384884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.384904] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.314 [2024-07-26 10:23:32.384917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.384937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.314 [2024-07-26 10:23:32.384950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.384969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.384983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.385016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.385039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.385057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.385070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.385112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.385126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.385149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.314 [2024-07-26 10:23:32.385165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.385185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.314 [2024-07-26 10:23:32.385224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.385244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.385258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.385286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.385300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.385320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.385334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.385355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:107136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.385369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.385389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:107144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.385403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.385423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.385446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.385467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:107168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.385488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.385508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.385523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.385588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.385602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.385636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.385657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.385690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.314 [2024-07-26 10:23:32.385706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.385726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.385743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.385762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.314 [2024-07-26 10:23:32.385775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.385794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.314 [2024-07-26 10:23:32.385807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:00.314 [2024-07-26 10:23:32.385825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:107848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.314 [2024-07-26 10:23:32.385839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.385858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:107856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.315 [2024-07-26 10:23:32.385871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.385889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.315 [2024-07-26 10:23:32.385902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.385921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.315 [2024-07-26 10:23:32.385933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.385952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:107880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.315 [2024-07-26 10:23:32.385965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.385984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.315 [2024-07-26 10:23:32.385996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.315 [2024-07-26 10:23:32.386028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.315 [2024-07-26 
10:23:32.386066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:107912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.315 [2024-07-26 10:23:32.386100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:107920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.315 [2024-07-26 10:23:32.386155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.315 [2024-07-26 10:23:32.386189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:107232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.315 [2024-07-26 10:23:32.386257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:107248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.315 [2024-07-26 10:23:32.386289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.315 [2024-07-26 10:23:32.386339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:107288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.315 [2024-07-26 10:23:32.386373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:107296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.315 [2024-07-26 10:23:32.386406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.315 [2024-07-26 10:23:32.386440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:107312 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.315 [2024-07-26 10:23:32.386474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:107320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.315 [2024-07-26 10:23:32.386507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:107936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.315 [2024-07-26 10:23:32.386548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.315 [2024-07-26 10:23:32.386624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.315 [2024-07-26 10:23:32.386689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.315 [2024-07-26 10:23:32.386725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.315 [2024-07-26 10:23:32.386760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.315 [2024-07-26 10:23:32.386795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.315 [2024-07-26 10:23:32.386831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.315 [2024-07-26 10:23:32.386866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.315 [2024-07-26 10:23:32.386901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.315 [2024-07-26 10:23:32.386935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.386956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:108016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.315 [2024-07-26 10:23:32.386970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.387005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.315 [2024-07-26 10:23:32.387018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.387038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.315 [2024-07-26 10:23:32.387052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.387094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.315 [2024-07-26 10:23:32.387109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.387129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:107360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.315 [2024-07-26 10:23:32.387142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.387162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:107384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.315 [2024-07-26 10:23:32.387175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.387194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.315 [2024-07-26 10:23:32.387233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.387253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.315 [2024-07-26 10:23:32.387267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 
m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.387297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.315 [2024-07-26 10:23:32.387311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.387331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.315 [2024-07-26 10:23:32.387345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:00.315 [2024-07-26 10:23:32.388326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.316 [2024-07-26 10:23:32.388352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:32.388385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.316 [2024-07-26 10:23:32.388409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:32.388438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:108048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.316 [2024-07-26 10:23:32.388474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:32.388505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.316 [2024-07-26 10:23:32.388535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:32.388563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.316 [2024-07-26 10:23:32.388577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:32.388648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.316 [2024-07-26 10:23:32.388665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:32.388694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.316 [2024-07-26 10:23:32.388708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:32.388735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:108088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.316 [2024-07-26 10:23:32.388749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:32.388777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.316 [2024-07-26 10:23:32.388791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:32.388819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:108104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.316 [2024-07-26 10:23:32.388833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:32.388860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.316 [2024-07-26 10:23:32.388875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:32.388903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:108120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.316 [2024-07-26 10:23:32.388917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:32.388945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.316 [2024-07-26 10:23:32.388959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:32.389017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.316 [2024-07-26 10:23:32.389035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:32.389063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.316 [2024-07-26 10:23:32.389077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:32.389104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:108152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.316 [2024-07-26 10:23:32.389117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:32.389144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.316 [2024-07-26 10:23:32.389158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:32.389185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:108168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.316 [2024-07-26 10:23:32.389238] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:39.404891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.316 [2024-07-26 10:23:39.404963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:39.405037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.316 [2024-07-26 10:23:39.405057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:39.405078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:70920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.316 [2024-07-26 10:23:39.405092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:39.405112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.316 [2024-07-26 10:23:39.405126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:39.405145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.316 [2024-07-26 10:23:39.405159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:39.405178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:70944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.316 [2024-07-26 10:23:39.405191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:39.405211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.316 [2024-07-26 10:23:39.405224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:39.405242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:70960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.316 [2024-07-26 10:23:39.405256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:39.405274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.316 [2024-07-26 10:23:39.405287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:39.405306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:00.316 [2024-07-26 10:23:39.405320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:39.405338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.316 [2024-07-26 10:23:39.405351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:39.405370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.316 [2024-07-26 10:23:39.405403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:39.405424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.316 [2024-07-26 10:23:39.405438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:39.405457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.316 [2024-07-26 10:23:39.405470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:39.405489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:70376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.316 [2024-07-26 10:23:39.405502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:39.405521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:70384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.316 [2024-07-26 10:23:39.405534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:39.405552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.316 [2024-07-26 10:23:39.405566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:39.405585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.316 [2024-07-26 10:23:39.405614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:39.405634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.316 [2024-07-26 10:23:39.405648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:39.405667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:30 nsid:1 lba:70448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.316 [2024-07-26 10:23:39.405680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:00.316 [2024-07-26 10:23:39.405698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.316 [2024-07-26 10:23:39.405712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.405731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.317 [2024-07-26 10:23:39.405746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.405784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.317 [2024-07-26 10:23:39.405802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.405823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.317 [2024-07-26 10:23:39.405838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.405868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.317 [2024-07-26 10:23:39.405884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.406086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.317 [2024-07-26 10:23:39.406105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.406127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.317 [2024-07-26 10:23:39.406142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.406162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.317 [2024-07-26 10:23:39.406175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.406195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.317 [2024-07-26 10:23:39.406208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.406228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.317 [2024-07-26 10:23:39.406242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.406262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:71080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.317 [2024-07-26 10:23:39.406275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.406295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.317 [2024-07-26 10:23:39.406308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.406328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.317 [2024-07-26 10:23:39.406342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.406361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.317 [2024-07-26 10:23:39.406374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.406394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.317 [2024-07-26 10:23:39.406407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.406427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.317 [2024-07-26 10:23:39.406441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.406470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.317 [2024-07-26 10:23:39.406485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.406505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.317 [2024-07-26 10:23:39.406519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.406538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.317 [2024-07-26 10:23:39.406552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 
00:19:00.317 [2024-07-26 10:23:39.406600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:70456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.317 [2024-07-26 10:23:39.406634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.406656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.317 [2024-07-26 10:23:39.406671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.406692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.317 [2024-07-26 10:23:39.406706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.406744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.317 [2024-07-26 10:23:39.406759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.406780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.317 [2024-07-26 10:23:39.406795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.406816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:70504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.317 [2024-07-26 10:23:39.406830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.406852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:70520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.317 [2024-07-26 10:23:39.406867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.406888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.317 [2024-07-26 10:23:39.406903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.406924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.317 [2024-07-26 10:23:39.406939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.406961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.317 [2024-07-26 10:23:39.406984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.407007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.317 [2024-07-26 10:23:39.407036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.407072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.317 [2024-07-26 10:23:39.407086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.407106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.317 [2024-07-26 10:23:39.407121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.407141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.317 [2024-07-26 10:23:39.407155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.407175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.317 [2024-07-26 10:23:39.407189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.407209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.317 [2024-07-26 10:23:39.407223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.407244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.317 [2024-07-26 10:23:39.407259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.407280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.317 [2024-07-26 10:23:39.407294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.407314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.317 [2024-07-26 10:23:39.407328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.407348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:71240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.317 [2024-07-26 10:23:39.407362] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:00.317 [2024-07-26 10:23:39.407383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.317 [2024-07-26 10:23:39.407397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:00.318 [2024-07-26 10:23:39.407433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.318 [2024-07-26 10:23:39.407459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:00.318 [2024-07-26 10:23:39.407482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.318 [2024-07-26 10:23:39.407497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:00.318 [2024-07-26 10:23:39.407518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.318 [2024-07-26 10:23:39.407532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:00.318 [2024-07-26 10:23:39.407553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.318 [2024-07-26 10:23:39.407567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:00.318 [2024-07-26 10:23:39.407587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:70640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.318 [2024-07-26 10:23:39.407601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:00.318 [2024-07-26 10:23:39.407621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:70664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.318 [2024-07-26 10:23:39.407678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:00.318 [2024-07-26 10:23:39.407705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.318 [2024-07-26 10:23:39.407720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:00.318 [2024-07-26 10:23:39.407741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:70680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.318 [2024-07-26 10:23:39.407755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:00.318 [2024-07-26 10:23:39.407776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:00.318 [2024-07-26 10:23:39.407790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:00.318 [2024-07-26 10:23:39.407811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:70704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.318 [2024-07-26 10:23:39.407826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:00.318 [2024-07-26 10:23:39.407847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:70736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.318 [2024-07-26 10:23:39.407862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:00.318 [2024-07-26 10:23:39.407884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.319 [2024-07-26 10:23:39.407899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.407920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:71288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.319 [2024-07-26 10:23:39.407943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.407981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.319 [2024-07-26 10:23:39.407995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.408015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.319 [2024-07-26 10:23:39.408029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.408049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.319 [2024-07-26 10:23:39.408063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.408083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.319 [2024-07-26 10:23:39.408097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.408117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.319 [2024-07-26 10:23:39.408131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.408151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 
nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.319 [2024-07-26 10:23:39.408165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.408185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.319 [2024-07-26 10:23:39.408199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.408219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.319 [2024-07-26 10:23:39.408233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.408253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.319 [2024-07-26 10:23:39.408267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.408288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.319 [2024-07-26 10:23:39.408302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.408321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.319 [2024-07-26 10:23:39.408335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.408355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.319 [2024-07-26 10:23:39.408370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.408401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.319 [2024-07-26 10:23:39.408416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.408436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.319 [2024-07-26 10:23:39.408451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.408472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.319 [2024-07-26 10:23:39.408486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.408506] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.319 [2024-07-26 10:23:39.408519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.408540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.319 [2024-07-26 10:23:39.408554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.408574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.319 [2024-07-26 10:23:39.408598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.408621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.319 [2024-07-26 10:23:39.408636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.408656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:70760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.319 [2024-07-26 10:23:39.408670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.408690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.319 [2024-07-26 10:23:39.408704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.409540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.319 [2024-07-26 10:23:39.409565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.409612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:70848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.319 [2024-07-26 10:23:39.409628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.409655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:70856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.319 [2024-07-26 10:23:39.409670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:00.319 [2024-07-26 10:23:39.409709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:70864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.319 [2024-07-26 10:23:39.409725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0056 p:0 m:0 
dnr:0 00:19:00.320 [2024-07-26 10:23:39.409753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:39.409768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:39.409795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:70896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:39.409810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:39.409836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.320 [2024-07-26 10:23:39.409850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:39.409877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:71456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:39.409892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:39.409919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:39.409933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:39.409960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.320 [2024-07-26 10:23:39.409974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:39.410001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.320 [2024-07-26 10:23:39.410015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:39.410042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.320 [2024-07-26 10:23:39.410056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:39.410083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.320 [2024-07-26 10:23:39.410098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:39.410125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.320 [2024-07-26 10:23:39.410139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:39.410166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:39.410180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:39.410207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.320 [2024-07-26 10:23:39.410228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:39.410256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:39.410270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:39.410297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:39.410311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:39.410354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.320 [2024-07-26 10:23:39.410372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:39.410400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:71552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:39.410415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:39.410441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.320 [2024-07-26 10:23:39.410456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:39.410483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.320 [2024-07-26 10:23:39.410497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:39.410523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.320 [2024-07-26 10:23:39.410538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:39.410564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.320 [2024-07-26 10:23:39.410609] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:39.410639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:71592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:39.410655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:52.719530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:52.719587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:52.719618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:52.719676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:52.719694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:52.719729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:52.719747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:52.719761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:52.719776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:52.719790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:52.719805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:52.719818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:52.719834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:52.719847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:52.719862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:52.719876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:52.719891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:52.719904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:52.719919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:52.719933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:52.719948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:52.719961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:52.719991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:52.720005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:52.720020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:52.720033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:52.720048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:52.720061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:52.720076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:52.720088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:52.720103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:52.720124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:52.720140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.320 [2024-07-26 10:23:52.720154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.320 [2024-07-26 10:23:52.720169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.720183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.720210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.720239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.720267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.720295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.720322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.720350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.720378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.720405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.720432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.720462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.321 [2024-07-26 10:23:52.720498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.720526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.321 [2024-07-26 10:23:52.720555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.720582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.321 [2024-07-26 10:23:52.720659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.321 [2024-07-26 10:23:52.720695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.321 [2024-07-26 10:23:52.720724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.720753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.720781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.720810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.321 [2024-07-26 10:23:52.720839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 
[2024-07-26 10:23:52.720854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.321 [2024-07-26 10:23:52.720868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.321 [2024-07-26 10:23:52.720905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.720937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.720966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.720981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.720995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.721010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.721023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.721039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.721052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.721068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.721081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.721097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.721110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.721125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.721139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.721155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.721168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.721183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.721197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.721212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.721225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.721240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.721254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.721274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.321 [2024-07-26 10:23:52.721289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.721304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.321 [2024-07-26 10:23:52.721317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.721332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.321 [2024-07-26 10:23:52.721346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.321 [2024-07-26 10:23:52.721361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.321 [2024-07-26 10:23:52.721375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.721391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.322 [2024-07-26 10:23:52.721404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.721420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.322 [2024-07-26 10:23:52.721433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.721448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:22 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.322 [2024-07-26 10:23:52.721462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.721476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.322 [2024-07-26 10:23:52.721491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.721506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.322 [2024-07-26 10:23:52.721520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.721535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.322 [2024-07-26 10:23:52.721548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.721563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.322 [2024-07-26 10:23:52.721591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.721609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.322 [2024-07-26 10:23:52.721623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.721638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.322 [2024-07-26 10:23:52.721659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.721676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.322 [2024-07-26 10:23:52.721690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.721705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.322 [2024-07-26 10:23:52.721719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.721734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.322 [2024-07-26 10:23:52.721748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.721763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77848 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.322 [2024-07-26 10:23:52.721776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.721791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.322 [2024-07-26 10:23:52.721805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.721820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.322 [2024-07-26 10:23:52.721834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.721849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.322 [2024-07-26 10:23:52.721863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.721878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.322 [2024-07-26 10:23:52.721892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.721907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.322 [2024-07-26 10:23:52.721921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.721936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.322 [2024-07-26 10:23:52.721950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.721965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.322 [2024-07-26 10:23:52.721984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.722000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.322 [2024-07-26 10:23:52.722013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.722028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.322 [2024-07-26 10:23:52.722047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.722064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:00.322 [2024-07-26 10:23:52.722078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.722094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.322 [2024-07-26 10:23:52.722107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.722122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.322 [2024-07-26 10:23:52.722136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.722151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.322 [2024-07-26 10:23:52.722165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.722180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.322 [2024-07-26 10:23:52.722193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.722208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.322 [2024-07-26 10:23:52.722222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.722237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.322 [2024-07-26 10:23:52.722250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.722265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.322 [2024-07-26 10:23:52.722278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.722294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.322 [2024-07-26 10:23:52.722308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.722322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.322 [2024-07-26 10:23:52.722336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.722350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.322 [2024-07-26 10:23:52.722364] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.722379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.322 [2024-07-26 10:23:52.722393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.722414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.322 [2024-07-26 10:23:52.722429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.722444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.322 [2024-07-26 10:23:52.722459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.722474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.322 [2024-07-26 10:23:52.722488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.722503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.322 [2024-07-26 10:23:52.722516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.722531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.322 [2024-07-26 10:23:52.722545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.322 [2024-07-26 10:23:52.722561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.323 [2024-07-26 10:23:52.722585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.722603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.323 [2024-07-26 10:23:52.722617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.722632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.323 [2024-07-26 10:23:52.722646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.722661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.323 [2024-07-26 10:23:52.722675] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.722690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.323 [2024-07-26 10:23:52.722703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.722718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.323 [2024-07-26 10:23:52.722732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.722747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.323 [2024-07-26 10:23:52.722760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.722775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.323 [2024-07-26 10:23:52.722796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.722813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.323 [2024-07-26 10:23:52.722827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.722842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.323 [2024-07-26 10:23:52.722856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.722871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.323 [2024-07-26 10:23:52.722885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.722900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.323 [2024-07-26 10:23:52.722913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.722929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.323 [2024-07-26 10:23:52.722942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.722958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.323 [2024-07-26 10:23:52.722971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.722986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.323 [2024-07-26 10:23:52.723000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.723015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.323 [2024-07-26 10:23:52.723034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.723049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.323 [2024-07-26 10:23:52.723063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.723079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.323 [2024-07-26 10:23:52.723092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.723108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.323 [2024-07-26 10:23:52.723121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.723136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.323 [2024-07-26 10:23:52.723150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.723171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.323 [2024-07-26 10:23:52.723186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.723201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.323 [2024-07-26 10:23:52.723215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.723231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.323 [2024-07-26 10:23:52.723245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.723260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.323 [2024-07-26 10:23:52.723273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:00.323 [2024-07-26 10:23:52.723288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.323 [2024-07-26 10:23:52.723302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.723317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.323 [2024-07-26 10:23:52.723331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.723346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.323 [2024-07-26 10:23:52.723359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.723374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.323 [2024-07-26 10:23:52.723388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.723403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.323 [2024-07-26 10:23:52.723417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.723432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.323 [2024-07-26 10:23:52.723446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.723461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.323 [2024-07-26 10:23:52.723474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.723489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbcdd0 is same with the state(5) to be set 00:19:00.323 [2024-07-26 10:23:52.723510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:00.323 [2024-07-26 10:23:52.723521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:00.323 [2024-07-26 10:23:52.723532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78216 len:8 PRP1 0x0 PRP2 0x0 00:19:00.323 [2024-07-26 10:23:52.723551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.323 [2024-07-26 10:23:52.723623] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fbcdd0 was disconnected and freed. reset controller. 
00:19:00.323 [2024-07-26 10:23:52.724762] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:00.324 [2024-07-26 10:23:52.724848] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fca660 (9): Bad file descriptor 00:19:00.324 [2024-07-26 10:23:52.725185] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:00.324 [2024-07-26 10:23:52.725264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:00.324 [2024-07-26 10:23:52.725317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:00.324 [2024-07-26 10:23:52.725340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fca660 with addr=10.0.0.2, port=4421 00:19:00.324 [2024-07-26 10:23:52.725357] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fca660 is same with the state(5) to be set 00:19:00.324 [2024-07-26 10:23:52.725390] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fca660 (9): Bad file descriptor 00:19:00.324 [2024-07-26 10:23:52.725421] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:00.324 [2024-07-26 10:23:52.725438] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:00.324 [2024-07-26 10:23:52.725453] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:00.324 [2024-07-26 10:23:52.725485] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:00.324 [2024-07-26 10:23:52.725502] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:00.324 [2024-07-26 10:24:02.772689] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
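The reconnect loop above (connect() refused with errno 111, i.e. ECONNREFUSED, until the listener comes back, then "Resetting controller successful") is driven by whatever reconnect options bdev_nvme was configured with when the controller was attached. While such a reset is in flight, the controller state can be inspected over the bdevperf RPC socket; a hedged example follows (the bdev_nvme_get_controllers call and the /var/tmp/bdevperf.sock path are not shown in this part of the trace and are assumed from the suite's usual setup):

  # list the attached bdev_nvme controllers and their current connect/reset state (assumed RPC)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers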
00:19:00.324 Received shutdown signal, test time was about 55.152615 seconds 00:19:00.324 00:19:00.324 Latency(us) 00:19:00.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.324 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:00.324 Verification LBA range: start 0x0 length 0x4000 00:19:00.324 Nvme0n1 : 55.15 10750.68 41.99 0.00 0.00 11887.74 286.72 7015926.69 00:19:00.324 =================================================================================================================== 00:19:00.324 Total : 10750.68 41.99 0.00 0.00 11887.74 286.72 7015926.69 00:19:00.324 10:24:13 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:00.324 10:24:13 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:00.324 10:24:13 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:00.324 10:24:13 -- host/multipath.sh@125 -- # nvmftestfini 00:19:00.324 10:24:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:00.324 10:24:13 -- nvmf/common.sh@116 -- # sync 00:19:00.324 10:24:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:00.324 10:24:13 -- nvmf/common.sh@119 -- # set +e 00:19:00.324 10:24:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:00.324 10:24:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:00.324 rmmod nvme_tcp 00:19:00.324 rmmod nvme_fabrics 00:19:00.324 rmmod nvme_keyring 00:19:00.324 10:24:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:00.324 10:24:13 -- nvmf/common.sh@123 -- # set -e 00:19:00.324 10:24:13 -- nvmf/common.sh@124 -- # return 0 00:19:00.324 10:24:13 -- nvmf/common.sh@477 -- # '[' -n 83934 ']' 00:19:00.324 10:24:13 -- nvmf/common.sh@478 -- # killprocess 83934 00:19:00.324 10:24:13 -- common/autotest_common.sh@926 -- # '[' -z 83934 ']' 00:19:00.324 10:24:13 -- common/autotest_common.sh@930 -- # kill -0 83934 00:19:00.324 10:24:13 -- common/autotest_common.sh@931 -- # uname 00:19:00.324 10:24:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:00.324 10:24:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83934 00:19:00.324 10:24:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:00.324 10:24:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:00.324 killing process with pid 83934 00:19:00.324 10:24:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83934' 00:19:00.324 10:24:13 -- common/autotest_common.sh@945 -- # kill 83934 00:19:00.324 10:24:13 -- common/autotest_common.sh@950 -- # wait 83934 00:19:00.324 10:24:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:00.324 10:24:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:00.324 10:24:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:00.324 10:24:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:00.324 10:24:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:00.324 10:24:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.324 10:24:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:00.324 10:24:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.324 10:24:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:00.324 00:19:00.324 real 1m0.780s 00:19:00.324 user 2m47.393s 00:19:00.324 sys 0m19.001s 00:19:00.324 10:24:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:00.324 
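The cleanup trace above reduces to roughly the following shell sequence (the subsystem NQN, pid 83934 and file paths are taken verbatim from the log; nvmftestfini and killprocess are helpers defined by the test suite itself):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the target subsystem
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt                                      # remove the multipath scratch file
  modprobe -v -r nvme-tcp                                                                        # unload initiator-side kernel modules
  modprobe -v -r nvme-fabrics
  kill 83934                                                                                     # stop the nvmf_tgt reactor process
  ip -4 addr flush nvmf_init_if                                                                  # flush the initiator-side veth address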
************************************ 00:19:00.324 END TEST nvmf_multipath 00:19:00.324 10:24:13 -- common/autotest_common.sh@10 -- # set +x 00:19:00.324 ************************************ 00:19:00.583 10:24:13 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:00.583 10:24:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:00.583 10:24:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:00.583 10:24:13 -- common/autotest_common.sh@10 -- # set +x 00:19:00.583 ************************************ 00:19:00.583 START TEST nvmf_timeout 00:19:00.583 ************************************ 00:19:00.583 10:24:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:00.583 * Looking for test storage... 00:19:00.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:00.583 10:24:13 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:00.583 10:24:13 -- nvmf/common.sh@7 -- # uname -s 00:19:00.583 10:24:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:00.583 10:24:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.583 10:24:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.583 10:24:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.583 10:24:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.583 10:24:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.583 10:24:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.583 10:24:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.583 10:24:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.583 10:24:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.583 10:24:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:19:00.583 10:24:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:19:00.583 10:24:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.583 10:24:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.583 10:24:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:00.583 10:24:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:00.583 10:24:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.583 10:24:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.583 10:24:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.583 10:24:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.583 10:24:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.583 10:24:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.583 10:24:13 -- paths/export.sh@5 -- # export PATH 00:19:00.583 10:24:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.583 10:24:13 -- nvmf/common.sh@46 -- # : 0 00:19:00.583 10:24:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:00.583 10:24:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:00.583 10:24:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:00.583 10:24:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.583 10:24:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.583 10:24:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:00.583 10:24:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:00.583 10:24:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:00.583 10:24:13 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:00.583 10:24:13 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:00.583 10:24:13 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:00.583 10:24:13 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:00.583 10:24:13 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:00.583 10:24:13 -- host/timeout.sh@19 -- # nvmftestinit 00:19:00.583 10:24:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:00.583 10:24:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.583 10:24:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:00.583 10:24:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:00.583 10:24:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:00.583 10:24:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.583 10:24:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:00.583 10:24:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.583 10:24:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 
00:19:00.583 10:24:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:00.583 10:24:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:00.583 10:24:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:00.583 10:24:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:00.583 10:24:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:00.583 10:24:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:00.583 10:24:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:00.583 10:24:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:00.583 10:24:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:00.584 10:24:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:00.584 10:24:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:00.584 10:24:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:00.584 10:24:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:00.584 10:24:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:00.584 10:24:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:00.584 10:24:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:00.584 10:24:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:00.584 10:24:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:00.584 10:24:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:00.584 Cannot find device "nvmf_tgt_br" 00:19:00.584 10:24:13 -- nvmf/common.sh@154 -- # true 00:19:00.584 10:24:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:00.584 Cannot find device "nvmf_tgt_br2" 00:19:00.584 10:24:13 -- nvmf/common.sh@155 -- # true 00:19:00.584 10:24:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:00.584 10:24:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:00.584 Cannot find device "nvmf_tgt_br" 00:19:00.584 10:24:13 -- nvmf/common.sh@157 -- # true 00:19:00.584 10:24:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:00.584 Cannot find device "nvmf_tgt_br2" 00:19:00.584 10:24:13 -- nvmf/common.sh@158 -- # true 00:19:00.584 10:24:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:00.584 10:24:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:00.584 10:24:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:00.584 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:00.584 10:24:14 -- nvmf/common.sh@161 -- # true 00:19:00.584 10:24:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:00.584 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:00.584 10:24:14 -- nvmf/common.sh@162 -- # true 00:19:00.584 10:24:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:00.842 10:24:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:00.842 10:24:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:00.842 10:24:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:00.842 10:24:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:00.842 10:24:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:00.842 10:24:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:19:00.842 10:24:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:00.842 10:24:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:00.842 10:24:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:00.842 10:24:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:00.842 10:24:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:00.842 10:24:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:00.842 10:24:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:00.842 10:24:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:00.842 10:24:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:00.842 10:24:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:00.842 10:24:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:00.842 10:24:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:00.843 10:24:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:00.843 10:24:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:00.843 10:24:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:00.843 10:24:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:00.843 10:24:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:00.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:00.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:19:00.843 00:19:00.843 --- 10.0.0.2 ping statistics --- 00:19:00.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.843 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:19:00.843 10:24:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:00.843 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:00.843 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:19:00.843 00:19:00.843 --- 10.0.0.3 ping statistics --- 00:19:00.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.843 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:19:00.843 10:24:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:00.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:00.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:19:00.843 00:19:00.843 --- 10.0.0.1 ping statistics --- 00:19:00.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.843 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:19:00.843 10:24:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.843 10:24:14 -- nvmf/common.sh@421 -- # return 0 00:19:00.843 10:24:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:00.843 10:24:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.843 10:24:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:00.843 10:24:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:00.843 10:24:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.843 10:24:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:00.843 10:24:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:00.843 10:24:14 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:00.843 10:24:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:00.843 10:24:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:00.843 10:24:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.843 10:24:14 -- nvmf/common.sh@469 -- # nvmfpid=85110 00:19:00.843 10:24:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:00.843 10:24:14 -- nvmf/common.sh@470 -- # waitforlisten 85110 00:19:00.843 10:24:14 -- common/autotest_common.sh@819 -- # '[' -z 85110 ']' 00:19:00.843 10:24:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.843 10:24:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:00.843 10:24:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.843 10:24:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:00.843 10:24:14 -- common/autotest_common.sh@10 -- # set +x 00:19:00.843 [2024-07-26 10:24:14.278362] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:19:00.843 [2024-07-26 10:24:14.278467] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.101 [2024-07-26 10:24:14.414966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:01.101 [2024-07-26 10:24:14.501243] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:01.101 [2024-07-26 10:24:14.501411] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.101 [2024-07-26 10:24:14.501423] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.101 [2024-07-26 10:24:14.501431] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
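Taken together, the nvmf_veth_init steps above build this topology: the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, the target runs inside nvmf_tgt_ns_spdk with 10.0.0.2 and 10.0.0.3 on its veth ends, and everything is joined through the nvmf_br bridge, with TCP port 4420 opened for the NVMe-oF listener. A condensed reproduction of that setup, using the device names and addresses from the trace, would be roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br           # initiator-side pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br            # target-side pairs
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                            # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                             # bridge all root-namespace ends
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow the NVMe/TCP listener port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # each interface is then brought up with `ip link set <dev> up`, inside and outside the namespace,
  # and reachability is verified with the three pings shown above (10.0.0.2, 10.0.0.3, 10.0.0.1)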
00:19:01.101 [2024-07-26 10:24:14.501645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.101 [2024-07-26 10:24:14.501652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.036 10:24:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:02.036 10:24:15 -- common/autotest_common.sh@852 -- # return 0 00:19:02.036 10:24:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:02.036 10:24:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:02.036 10:24:15 -- common/autotest_common.sh@10 -- # set +x 00:19:02.036 10:24:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.036 10:24:15 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:02.036 10:24:15 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:02.036 [2024-07-26 10:24:15.489144] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.294 10:24:15 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:02.553 Malloc0 00:19:02.553 10:24:15 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:02.553 10:24:16 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:02.811 10:24:16 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:03.069 [2024-07-26 10:24:16.456200] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:03.069 10:24:16 -- host/timeout.sh@32 -- # bdevperf_pid=85159 00:19:03.069 10:24:16 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:03.069 10:24:16 -- host/timeout.sh@34 -- # waitforlisten 85159 /var/tmp/bdevperf.sock 00:19:03.069 10:24:16 -- common/autotest_common.sh@819 -- # '[' -z 85159 ']' 00:19:03.069 10:24:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:03.069 10:24:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:03.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:03.069 10:24:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:03.069 10:24:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:03.069 10:24:16 -- common/autotest_common.sh@10 -- # set +x 00:19:03.069 [2024-07-26 10:24:16.516565] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:19:03.069 [2024-07-26 10:24:16.516677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85159 ] 00:19:03.327 [2024-07-26 10:24:16.651583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.327 [2024-07-26 10:24:16.740129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:04.261 10:24:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:04.261 10:24:17 -- common/autotest_common.sh@852 -- # return 0 00:19:04.261 10:24:17 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:04.261 10:24:17 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:04.519 NVMe0n1 00:19:04.519 10:24:17 -- host/timeout.sh@51 -- # rpc_pid=85177 00:19:04.519 10:24:17 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:04.519 10:24:17 -- host/timeout.sh@53 -- # sleep 1 00:19:04.778 Running I/O for 10 seconds... 00:19:05.713 10:24:18 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:05.973 [2024-07-26 10:24:19.204147] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.973 [2024-07-26 10:24:19.204226] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.973 [2024-07-26 10:24:19.204254] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.973 [2024-07-26 10:24:19.204263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.973 [2024-07-26 10:24:19.204271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.973 [2024-07-26 10:24:19.204280] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.973 [2024-07-26 10:24:19.204288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.973 [2024-07-26 10:24:19.204296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.973 [2024-07-26 10:24:19.204304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.973 [2024-07-26 10:24:19.204312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.973 [2024-07-26 10:24:19.204321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.973 [2024-07-26 10:24:19.204329] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.973 
[2024-07-26 10:24:19.204336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.973 [2024-07-26 10:24:19.204343] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.973 [2024-07-26 10:24:19.204351] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.973 [2024-07-26 10:24:19.204359] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.974 [2024-07-26 10:24:19.204366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.974 [2024-07-26 10:24:19.204374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.974 [2024-07-26 10:24:19.204381] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.974 [2024-07-26 10:24:19.204389] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.974 [2024-07-26 10:24:19.204397] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.974 [2024-07-26 10:24:19.204404] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.974 [2024-07-26 10:24:19.204412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.974 [2024-07-26 10:24:19.204420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.974 [2024-07-26 10:24:19.204427] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.974 [2024-07-26 10:24:19.204435] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.974 [2024-07-26 10:24:19.204443] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.974 [2024-07-26 10:24:19.204450] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.974 [2024-07-26 10:24:19.204458] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.974 [2024-07-26 10:24:19.204465] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.974 [2024-07-26 10:24:19.204473] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.974 [2024-07-26 10:24:19.204481] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.974 [2024-07-26 10:24:19.204488] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.974 [2024-07-26 10:24:19.204496] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2271970 is same with the state(5) to be set 00:19:05.974 [2024-07-26 10:24:19.204505] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271970 is same with the state(5) to be set 00:19:05.974 [2024-07-26 10:24:19.204591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.205069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.205096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:126424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.205107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.205119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.205128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.205140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.205290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.205392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.205403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.205414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.205423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.205435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.205444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.205455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.205464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.205475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:127024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.205484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.205495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:127032 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.205505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.205516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.205631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.205648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:127096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.205658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.205669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:127104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.205678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.205689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:127112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.206080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.206097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.206106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.206118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:127136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.206127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.206138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.206149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.206160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.206169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.206180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.206189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.206201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 
[2024-07-26 10:24:19.206210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.206503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.206526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.206539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.206549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.206560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.206569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.206597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.206607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.206618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:126672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.206627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.206638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:127152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.206647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.206658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:127160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.974 [2024-07-26 10:24:19.206667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.206679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.206688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.206949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:127176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.974 [2024-07-26 10:24:19.206960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.974 [2024-07-26 10:24:19.207096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:127184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.975 [2024-07-26 10:24:19.207127] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.207225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:127192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.207235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.207247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.975 [2024-07-26 10:24:19.207256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.207366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:127208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.975 [2024-07-26 10:24:19.207381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.207392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:127216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.975 [2024-07-26 10:24:19.207402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.207551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:127224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.207681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.207695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:127232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.975 [2024-07-26 10:24:19.207705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.207716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:127240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.975 [2024-07-26 10:24:19.207725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.207736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:127248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.207746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.207757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:127256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.207766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.207777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:127264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.207786] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.207797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.975 [2024-07-26 10:24:19.207806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.208168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:127280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.208181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.208193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.975 [2024-07-26 10:24:19.208203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.208214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:127296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.208223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.208235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:127304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.208244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.208255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:127312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.975 [2024-07-26 10:24:19.208263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.208274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.208283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.208638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.208659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.208672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.208683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.208694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.208703] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.208714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.208725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.208736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:126752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.208745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.208756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.208765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.208776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:126832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.209098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.209124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.975 [2024-07-26 10:24:19.209135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.209146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.975 [2024-07-26 10:24:19.209156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.209168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:127336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.209176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.209187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:127344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.209197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.209208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:127352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.209217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.209228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:127360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.209236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.209360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.975 [2024-07-26 10:24:19.209376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.209475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.975 [2024-07-26 10:24:19.209488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.209500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:127384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.975 [2024-07-26 10:24:19.209509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.209520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.975 [2024-07-26 10:24:19.209636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.209653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:127400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.209662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.209808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:127408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.975 [2024-07-26 10:24:19.209904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.209920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.209929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.209941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.209951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.975 [2024-07-26 10:24:19.209962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:127432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.975 [2024-07-26 10:24:19.209971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.209982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:127440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.209991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.210002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:127448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.976 [2024-07-26 10:24:19.210011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.210022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.210356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.210373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.210382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.210392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.210401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.210413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.210423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.210434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.210557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.210720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.210819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.210835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.210844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.210855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.210864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.210875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:127456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.210884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 
[2024-07-26 10:24:19.211016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:127464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.976 [2024-07-26 10:24:19.211028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.211178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:127472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.976 [2024-07-26 10:24:19.211274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.211288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:127480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.211298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.211310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:127488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.211319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.211330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:127496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.211339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.211350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:127504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.211360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.211371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.976 [2024-07-26 10:24:19.211380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.211391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:127520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.976 [2024-07-26 10:24:19.211516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.211658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:127528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.976 [2024-07-26 10:24:19.211759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.211774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.976 [2024-07-26 10:24:19.211783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.211795] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:127544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.976 [2024-07-26 10:24:19.211804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.211934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:127552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.211946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.211957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:127560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.211966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.212075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:127568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.212090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.212103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:127576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.212112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.212425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:127584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.212447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.212465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:127592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.212475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.212486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.212496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.212507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:127608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.212517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.212528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:127616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.212537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.212548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:127624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.212658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.212675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:127632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.212685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.212696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:127640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.976 [2024-07-26 10:24:19.212827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.212968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.213121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.213278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:127656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.213553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.213657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:127664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.976 [2024-07-26 10:24:19.213670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.213682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:127000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.213691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.976 [2024-07-26 10:24:19.213702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.976 [2024-07-26 10:24:19.213711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.213722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:127016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.977 [2024-07-26 10:24:19.213731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.213742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:127040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.977 [2024-07-26 10:24:19.213750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.213761] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:124 nsid:1 lba:127048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.977 [2024-07-26 10:24:19.213887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.213984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:127056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.977 [2024-07-26 10:24:19.213999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.214017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:127064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.977 [2024-07-26 10:24:19.214027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.214039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:127080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.977 [2024-07-26 10:24:19.214150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.214165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:127672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.977 [2024-07-26 10:24:19.214175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.214319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:127680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.977 [2024-07-26 10:24:19.214474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.214755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:127688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.977 [2024-07-26 10:24:19.214846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.214859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:127696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.977 [2024-07-26 10:24:19.214869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.214881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:127704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.977 [2024-07-26 10:24:19.214891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.214902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:127712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.977 [2024-07-26 10:24:19.214911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.214922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 
lba:127720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.977 [2024-07-26 10:24:19.214932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.214943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:127728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.977 [2024-07-26 10:24:19.214952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.215079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.977 [2024-07-26 10:24:19.215175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.215192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:127744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.977 [2024-07-26 10:24:19.215203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.215214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:127752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.977 [2024-07-26 10:24:19.215223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.215347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:127760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.977 [2024-07-26 10:24:19.215363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.215376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:127088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.977 [2024-07-26 10:24:19.215489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.215515] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ac2f0 is same with the state(5) to be set 00:19:05.977 [2024-07-26 10:24:19.215529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:05.977 [2024-07-26 10:24:19.215769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:05.977 [2024-07-26 10:24:19.215780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127128 len:8 PRP1 0x0 PRP2 0x0 00:19:05.977 [2024-07-26 10:24:19.215791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.215847] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8ac2f0 was disconnected and freed. reset controller. 
00:19:05.977 [2024-07-26 10:24:19.216132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:05.977 [2024-07-26 10:24:19.216157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.216168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:05.977 [2024-07-26 10:24:19.216177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.216187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:05.977 [2024-07-26 10:24:19.216196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.216206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:05.977 [2024-07-26 10:24:19.216214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.977 [2024-07-26 10:24:19.216513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a1be0 is same with the state(5) to be set 00:19:05.977 [2024-07-26 10:24:19.217002] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:05.977 [2024-07-26 10:24:19.217037] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a1be0 (9): Bad file descriptor 00:19:05.977 [2024-07-26 10:24:19.217324] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:05.977 [2024-07-26 10:24:19.217405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:05.977 [2024-07-26 10:24:19.217714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:05.977 [2024-07-26 10:24:19.217744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1be0 with addr=10.0.0.2, port=4420 00:19:05.977 [2024-07-26 10:24:19.217757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a1be0 is same with the state(5) to be set 00:19:05.977 [2024-07-26 10:24:19.217778] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a1be0 (9): Bad file descriptor 00:19:05.977 [2024-07-26 10:24:19.217794] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:05.977 [2024-07-26 10:24:19.217804] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:05.977 [2024-07-26 10:24:19.217914] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:05.977 [2024-07-26 10:24:19.218063] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:05.977 [2024-07-26 10:24:19.218200] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:05.977 10:24:19 -- host/timeout.sh@56 -- # sleep 2 00:19:07.882 [2024-07-26 10:24:21.218457] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.882 [2024-07-26 10:24:21.218634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.882 [2024-07-26 10:24:21.218683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.882 [2024-07-26 10:24:21.218701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1be0 with addr=10.0.0.2, port=4420 00:19:07.882 [2024-07-26 10:24:21.218714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a1be0 is same with the state(5) to be set 00:19:07.882 [2024-07-26 10:24:21.218743] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a1be0 (9): Bad file descriptor 00:19:07.882 [2024-07-26 10:24:21.218762] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:07.882 [2024-07-26 10:24:21.219038] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:07.882 [2024-07-26 10:24:21.219063] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:07.882 [2024-07-26 10:24:21.219094] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:07.882 [2024-07-26 10:24:21.219107] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:07.882 10:24:21 -- host/timeout.sh@57 -- # get_controller 00:19:07.882 10:24:21 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:07.882 10:24:21 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:08.140 10:24:21 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:08.140 10:24:21 -- host/timeout.sh@58 -- # get_bdev 00:19:08.140 10:24:21 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:08.140 10:24:21 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:08.400 10:24:21 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:08.400 10:24:21 -- host/timeout.sh@61 -- # sleep 5 00:19:09.776 [2024-07-26 10:24:23.219239] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:09.776 [2024-07-26 10:24:23.219353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:09.776 [2024-07-26 10:24:23.219396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:09.776 [2024-07-26 10:24:23.219412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1be0 with addr=10.0.0.2, port=4420 00:19:09.776 [2024-07-26 10:24:23.219424] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a1be0 is same with the state(5) to be set 00:19:09.776 [2024-07-26 10:24:23.219449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a1be0 (9): Bad file descriptor 00:19:09.776 [2024-07-26 10:24:23.219468] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:09.776 [2024-07-26 10:24:23.219477] 
nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:09.776 [2024-07-26 10:24:23.219488] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:09.776 [2024-07-26 10:24:23.219515] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:09.776 [2024-07-26 10:24:23.219527] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:12.309 [2024-07-26 10:24:25.219570] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:12.309 [2024-07-26 10:24:25.219664] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:12.309 [2024-07-26 10:24:25.219678] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:12.309 [2024-07-26 10:24:25.219689] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:12.309 [2024-07-26 10:24:25.219717] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:12.876 00:19:12.876 Latency(us) 00:19:12.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.876 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:12.876 Verification LBA range: start 0x0 length 0x4000 00:19:12.876 NVMe0n1 : 8.15 1944.04 7.59 15.71 0.00 65344.91 2993.80 7046430.72 00:19:12.876 =================================================================================================================== 00:19:12.876 Total : 1944.04 7.59 15.71 0.00 65344.91 2993.80 7046430.72 00:19:12.876 0 00:19:13.442 10:24:26 -- host/timeout.sh@62 -- # get_controller 00:19:13.442 10:24:26 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:13.442 10:24:26 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:13.699 10:24:26 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:13.699 10:24:26 -- host/timeout.sh@63 -- # get_bdev 00:19:13.699 10:24:26 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:13.699 10:24:26 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:13.957 10:24:27 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:13.957 10:24:27 -- host/timeout.sh@65 -- # wait 85177 00:19:13.958 10:24:27 -- host/timeout.sh@67 -- # killprocess 85159 00:19:13.958 10:24:27 -- common/autotest_common.sh@926 -- # '[' -z 85159 ']' 00:19:13.958 10:24:27 -- common/autotest_common.sh@930 -- # kill -0 85159 00:19:13.958 10:24:27 -- common/autotest_common.sh@931 -- # uname 00:19:13.958 10:24:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:13.958 10:24:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85159 00:19:13.958 killing process with pid 85159 00:19:13.958 Received shutdown signal, test time was about 9.151738 seconds 00:19:13.958 00:19:13.958 Latency(us) 00:19:13.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.958 =================================================================================================================== 00:19:13.958 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:13.958 10:24:27 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:13.958 10:24:27 -- 
common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:13.958 10:24:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85159' 00:19:13.958 10:24:27 -- common/autotest_common.sh@945 -- # kill 85159 00:19:13.958 10:24:27 -- common/autotest_common.sh@950 -- # wait 85159 00:19:13.958 10:24:27 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:14.216 [2024-07-26 10:24:27.656607] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.474 10:24:27 -- host/timeout.sh@74 -- # bdevperf_pid=85299 00:19:14.474 10:24:27 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:14.474 10:24:27 -- host/timeout.sh@76 -- # waitforlisten 85299 /var/tmp/bdevperf.sock 00:19:14.474 10:24:27 -- common/autotest_common.sh@819 -- # '[' -z 85299 ']' 00:19:14.474 10:24:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.474 10:24:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:14.474 10:24:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.474 10:24:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:14.474 10:24:27 -- common/autotest_common.sh@10 -- # set +x 00:19:14.474 [2024-07-26 10:24:27.716070] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:19:14.474 [2024-07-26 10:24:27.716168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85299 ] 00:19:14.474 [2024-07-26 10:24:27.849289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.732 [2024-07-26 10:24:27.933941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.299 10:24:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:15.299 10:24:28 -- common/autotest_common.sh@852 -- # return 0 00:19:15.299 10:24:28 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:15.558 10:24:28 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:15.816 NVMe0n1 00:19:15.816 10:24:29 -- host/timeout.sh@84 -- # rpc_pid=85322 00:19:15.816 10:24:29 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:15.816 10:24:29 -- host/timeout.sh@86 -- # sleep 1 00:19:15.816 Running I/O for 10 seconds... 
00:19:16.751 10:24:30 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:17.013 [2024-07-26 10:24:30.370446] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.371148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.371270] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.371353] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.371420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.371497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.371559] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.371694] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.371762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.371825] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.371886] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.371965] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.372079] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.372139] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.372199] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.372275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.372337] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.372403] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.372464] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.372523] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.372623] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.372695] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.372757] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.372818] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.372879] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.372958] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.373016] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.373075] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287e50 is same with the state(5) to be set 00:19:17.013 [2024-07-26 10:24:30.373320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:127736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.013 [2024-07-26 10:24:30.373353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.013 [2024-07-26 10:24:30.373479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:127744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.013 [2024-07-26 10:24:30.373492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.013 [2024-07-26 10:24:30.373625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:127776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.013 [2024-07-26 10:24:30.373641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.013 [2024-07-26 10:24:30.373774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.013 [2024-07-26 10:24:30.373788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.013 [2024-07-26 10:24:30.373811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.013 [2024-07-26 10:24:30.373946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.013 [2024-07-26 10:24:30.374035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:127840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.013 [2024-07-26 10:24:30.374047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.013 [2024-07-26 10:24:30.374058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:127848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.013 [2024-07-26 10:24:30.374067] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.013 [2024-07-26 10:24:30.374078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.013 [2024-07-26 10:24:30.374086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.013 [2024-07-26 10:24:30.374097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:128368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.013 [2024-07-26 10:24:30.374105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.013 [2024-07-26 10:24:30.374115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.013 [2024-07-26 10:24:30.374125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.013 [2024-07-26 10:24:30.374135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:128384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.013 [2024-07-26 10:24:30.374144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.013 [2024-07-26 10:24:30.374154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:128400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.013 [2024-07-26 10:24:30.374252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.013 [2024-07-26 10:24:30.374267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.013 [2024-07-26 10:24:30.374279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.013 [2024-07-26 10:24:30.374289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.013 [2024-07-26 10:24:30.374298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.013 [2024-07-26 10:24:30.374418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.013 [2024-07-26 10:24:30.374431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.013 [2024-07-26 10:24:30.374680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.013 [2024-07-26 10:24:30.374693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.013 [2024-07-26 10:24:30.374704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:127880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.013 [2024-07-26 10:24:30.374714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.013 [2024-07-26 10:24:30.374724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:127904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.013 [2024-07-26 10:24:30.374733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.374836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:127912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.374846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.374857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:127920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.374866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.374877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:127928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.375022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.375033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:127936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.375160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.375244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.375255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.375265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:127984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.375273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.375284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.014 [2024-07-26 10:24:30.375293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.375431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.014 [2024-07-26 10:24:30.375446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.375456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.014 [2024-07-26 10:24:30.375828] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.375860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.375871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.375881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.375890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.375901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.014 [2024-07-26 10:24:30.375910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.375920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.375929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.375940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.375948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.375958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.014 [2024-07-26 10:24:30.375967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.375977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.014 [2024-07-26 10:24:30.376228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.376241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.014 [2024-07-26 10:24:30.376450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.376472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.014 [2024-07-26 10:24:30.376482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.376492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.014 [2024-07-26 10:24:30.376501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.376510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.014 [2024-07-26 10:24:30.376519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.376639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.376655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.376666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.376806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.376950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.377062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.377076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:128632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.377084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.377095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:128640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.377103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.377114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.377123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.377225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:127992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.377236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.377246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.377255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.377265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.377274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.377379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.377392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.377404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.377413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.377550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.377568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.377683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.377695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.377706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.014 [2024-07-26 10:24:30.377715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.014 [2024-07-26 10:24:30.377726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.015 [2024-07-26 10:24:30.377735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.377746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.015 [2024-07-26 10:24:30.377755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.377766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.015 [2024-07-26 10:24:30.377776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.377787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.015 [2024-07-26 10:24:30.377796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.377807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.015 [2024-07-26 10:24:30.377816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:17.015 [2024-07-26 10:24:30.377828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.015 [2024-07-26 10:24:30.377837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.377848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.015 [2024-07-26 10:24:30.377857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.377868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:128712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.015 [2024-07-26 10:24:30.377878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.377888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.015 [2024-07-26 10:24:30.377897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.377908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.015 [2024-07-26 10:24:30.377917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.377930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.015 [2024-07-26 10:24:30.377954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.377965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.015 [2024-07-26 10:24:30.377974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.377985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.015 [2024-07-26 10:24:30.377994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.378005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.015 [2024-07-26 10:24:30.378014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.378024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.015 [2024-07-26 10:24:30.378033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 
10:24:30.378044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.015 [2024-07-26 10:24:30.378053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.378063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.015 [2024-07-26 10:24:30.378072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.378083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.015 [2024-07-26 10:24:30.378091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.378102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.015 [2024-07-26 10:24:30.378110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.378121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.015 [2024-07-26 10:24:30.378130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.378141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.015 [2024-07-26 10:24:30.378150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.378161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:128176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.015 [2024-07-26 10:24:30.378170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.378181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.015 [2024-07-26 10:24:30.378191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.378201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.015 [2024-07-26 10:24:30.378210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.378220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.015 [2024-07-26 10:24:30.378229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.378240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.015 [2024-07-26 10:24:30.378249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.378259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.015 [2024-07-26 10:24:30.378268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.378279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.015 [2024-07-26 10:24:30.378288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.378298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.015 [2024-07-26 10:24:30.378308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.378319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:128824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.015 [2024-07-26 10:24:30.378328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.378339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.015 [2024-07-26 10:24:30.378348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.378359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.015 [2024-07-26 10:24:30.378368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.015 [2024-07-26 10:24:30.378378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.015 [2024-07-26 10:24:30.378387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.016 [2024-07-26 10:24:30.378406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.016 [2024-07-26 10:24:30.378425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378437] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.016 [2024-07-26 10:24:30.378446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.016 [2024-07-26 10:24:30.378466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.016 [2024-07-26 10:24:30.378486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.016 [2024-07-26 10:24:30.378506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.016 [2024-07-26 10:24:30.378525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.016 [2024-07-26 10:24:30.378544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.016 [2024-07-26 10:24:30.378564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.016 [2024-07-26 10:24:30.378583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.016 [2024-07-26 10:24:30.378615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.016 [2024-07-26 10:24:30.378637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 
lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.016 [2024-07-26 10:24:30.378657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.016 [2024-07-26 10:24:30.378677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.016 [2024-07-26 10:24:30.378697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.016 [2024-07-26 10:24:30.378717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:128392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.016 [2024-07-26 10:24:30.378737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.016 [2024-07-26 10:24:30.378757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.016 [2024-07-26 10:24:30.378778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.016 [2024-07-26 10:24:30.378798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.016 [2024-07-26 10:24:30.378817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.016 [2024-07-26 10:24:30.378837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128968 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:17.016 [2024-07-26 10:24:30.378857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.016 [2024-07-26 10:24:30.378876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.016 [2024-07-26 10:24:30.378895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.016 [2024-07-26 10:24:30.378916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.016 [2024-07-26 10:24:30.378936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.016 [2024-07-26 10:24:30.378956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:129016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.016 [2024-07-26 10:24:30.378977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.378987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.016 [2024-07-26 10:24:30.378996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.379007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.016 [2024-07-26 10:24:30.379016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.379026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.016 [2024-07-26 10:24:30.379035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.016 [2024-07-26 10:24:30.379045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.016 
[2024-07-26 10:24:30.379054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.017 [2024-07-26 10:24:30.379065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:129056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.017 [2024-07-26 10:24:30.379074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.017 [2024-07-26 10:24:30.379084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.017 [2024-07-26 10:24:30.379093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.017 [2024-07-26 10:24:30.379104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.017 [2024-07-26 10:24:30.379113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.017 [2024-07-26 10:24:30.379124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.017 [2024-07-26 10:24:30.379133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.017 [2024-07-26 10:24:30.379144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.017 [2024-07-26 10:24:30.379152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.017 [2024-07-26 10:24:30.379163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.017 [2024-07-26 10:24:30.379172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.017 [2024-07-26 10:24:30.379183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.017 [2024-07-26 10:24:30.379192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.017 [2024-07-26 10:24:30.379203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.017 [2024-07-26 10:24:30.379212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.017 [2024-07-26 10:24:30.379223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.017 [2024-07-26 10:24:30.379233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.017 [2024-07-26 10:24:30.379243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4e90 is same with the state(5) to be set 00:19:17.017 [2024-07-26 10:24:30.379255] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:17.017 [2024-07-26 10:24:30.379263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:17.017 [2024-07-26 10:24:30.379271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128488 len:8 PRP1 0x0 PRP2 0x0 00:19:17.017 [2024-07-26 10:24:30.379280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.017 [2024-07-26 10:24:30.379331] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xeb4e90 was disconnected and freed. reset controller. 00:19:17.017 [2024-07-26 10:24:30.379422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.017 [2024-07-26 10:24:30.379439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.017 [2024-07-26 10:24:30.379450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.017 [2024-07-26 10:24:30.379459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.017 [2024-07-26 10:24:30.379468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.017 [2024-07-26 10:24:30.379477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.017 [2024-07-26 10:24:30.379487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.017 [2024-07-26 10:24:30.379496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.017 [2024-07-26 10:24:30.379505] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe86a20 is same with the state(5) to be set 00:19:17.017 [2024-07-26 10:24:30.379772] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:17.017 [2024-07-26 10:24:30.379796] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe86a20 (9): Bad file descriptor 00:19:17.017 [2024-07-26 10:24:30.379895] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:17.017 [2024-07-26 10:24:30.379957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:17.017 [2024-07-26 10:24:30.380020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:17.017 [2024-07-26 10:24:30.380035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe86a20 with addr=10.0.0.2, port=4420 00:19:17.017 [2024-07-26 10:24:30.380046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe86a20 is same with the state(5) to be set 00:19:17.017 [2024-07-26 10:24:30.380065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe86a20 (9): Bad file descriptor 00:19:17.017 [2024-07-26 10:24:30.380081] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:17.017 [2024-07-26 
10:24:30.380090] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:17.017 [2024-07-26 10:24:30.380100] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:17.017 [2024-07-26 10:24:30.380120] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:17.017 [2024-07-26 10:24:30.380130] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:17.017 10:24:30 -- host/timeout.sh@90 -- # sleep 1 00:19:17.957 [2024-07-26 10:24:31.380264] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:17.957 [2024-07-26 10:24:31.380374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:17.957 [2024-07-26 10:24:31.380417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:17.957 [2024-07-26 10:24:31.380449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe86a20 with addr=10.0.0.2, port=4420 00:19:17.957 [2024-07-26 10:24:31.380462] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe86a20 is same with the state(5) to be set 00:19:17.957 [2024-07-26 10:24:31.380487] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe86a20 (9): Bad file descriptor 00:19:17.957 [2024-07-26 10:24:31.380506] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:17.957 [2024-07-26 10:24:31.380515] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:17.957 [2024-07-26 10:24:31.380526] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:17.957 [2024-07-26 10:24:31.380552] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:17.957 [2024-07-26 10:24:31.380563] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:17.957 10:24:31 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:18.233 [2024-07-26 10:24:31.626447] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.233 10:24:31 -- host/timeout.sh@92 -- # wait 85322 00:19:19.171 [2024-07-26 10:24:32.400923] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:27.296
00:19:27.296                                         Latency(us)
00:19:27.296 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:27.296 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:27.296   Verification LBA range: start 0x0 length 0x4000
00:19:27.296   NVMe0n1                   :      10.01   10183.62      39.78       0.00       0.00   12543.11     774.52 3019898.88
00:19:27.296 ===================================================================================================================
00:19:27.296 Total                       :              10183.62      39.78       0.00       0.00   12543.11     774.52 3019898.88
00:19:27.296 0
00:19:27.296 10:24:39 -- host/timeout.sh@97 -- # rpc_pid=85427
00:19:27.296 10:24:39 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:27.296 10:24:39 -- host/timeout.sh@98 -- # sleep 1
00:19:27.296 Running I/O for 10 seconds... 
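A minimal sketch (not part of the captured output) of the listener bounce this test drives: it reuses only the rpc.py calls already shown in the trace above; the target path, NQN, address and port are taken verbatim from the log, while the explicit one-second pause is an assumption standing in for the script's own pacing between the remove and the re-add.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # drop the TCP listener so in-flight host I/O is aborted and reconnects fail (connect() errno = 111)
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1   # assumed pause while the initiator retries and exercises its timeout path
    # restore the listener; the next controller reset then completes successfully
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420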
00:19:27.296 10:24:40 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:27.296 [2024-07-26 10:24:40.516828] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22765e0 is same with the state(5) to be set 00:19:27.296 [2024-07-26 10:24:40.516898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22765e0 is same with the state(5) to be set 00:19:27.296 [2024-07-26 10:24:40.516909] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22765e0 is same with the state(5) to be set 00:19:27.296 [2024-07-26 10:24:40.516918] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22765e0 is same with the state(5) to be set 00:19:27.296 [2024-07-26 10:24:40.516925] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22765e0 is same with the state(5) to be set 00:19:27.296 [2024-07-26 10:24:40.516933] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22765e0 is same with the state(5) to be set 00:19:27.296 [2024-07-26 10:24:40.516941] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22765e0 is same with the state(5) to be set 00:19:27.296 [2024-07-26 10:24:40.516948] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22765e0 is same with the state(5) to be set 00:19:27.296 [2024-07-26 10:24:40.516956] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22765e0 is same with the state(5) to be set 00:19:27.296 [2024-07-26 10:24:40.516965] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22765e0 is same with the state(5) to be set 00:19:27.296 [2024-07-26 10:24:40.516973] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22765e0 is same with the state(5) to be set 00:19:27.296 [2024-07-26 10:24:40.516980] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22765e0 is same with the state(5) to be set 00:19:27.296 [2024-07-26 10:24:40.517042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:123448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.296 [2024-07-26 10:24:40.517070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 10:24:40.517091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:122800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.296 [2024-07-26 10:24:40.517101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 10:24:40.517112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:122808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.296 [2024-07-26 10:24:40.517121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 10:24:40.517131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:122832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.296 [2024-07-26 10:24:40.517140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:27.296 [2024-07-26 10:24:40.517151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:122848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.296 [2024-07-26 10:24:40.517160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 10:24:40.517170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:122864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.296 [2024-07-26 10:24:40.517178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 10:24:40.517188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.296 [2024-07-26 10:24:40.517197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 10:24:40.517207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:122928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.296 [2024-07-26 10:24:40.517217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 10:24:40.517227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:122936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.296 [2024-07-26 10:24:40.517237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 10:24:40.517248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:123472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.296 [2024-07-26 10:24:40.517256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 10:24:40.517267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:123480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.296 [2024-07-26 10:24:40.517275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 10:24:40.517285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:123512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.296 [2024-07-26 10:24:40.517294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 10:24:40.517304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:123536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.296 [2024-07-26 10:24:40.517312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 10:24:40.517323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:123544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.296 [2024-07-26 10:24:40.517331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 
10:24:40.517341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.296 [2024-07-26 10:24:40.517349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 10:24:40.517360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:123560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.296 [2024-07-26 10:24:40.517368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 10:24:40.517378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:123568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.296 [2024-07-26 10:24:40.517387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 10:24:40.517397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:123576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.296 [2024-07-26 10:24:40.517406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 10:24:40.517416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:123584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.296 [2024-07-26 10:24:40.517424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 10:24:40.517435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:123592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.296 [2024-07-26 10:24:40.517443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 10:24:40.517453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:123600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.296 [2024-07-26 10:24:40.517462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 10:24:40.517472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:122944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.296 [2024-07-26 10:24:40.517481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 10:24:40.517491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:122976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.296 [2024-07-26 10:24:40.517500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 10:24:40.517510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.296 [2024-07-26 10:24:40.517519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 10:24:40.517529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.296 [2024-07-26 10:24:40.517537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.296 [2024-07-26 10:24:40.517547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:123016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.296 [2024-07-26 10:24:40.517555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.517565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.517573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.517583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.517605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.517616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:123048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.517641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.517651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:123608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.297 [2024-07-26 10:24:40.517660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.517671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:123616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.297 [2024-07-26 10:24:40.517679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.517689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.297 [2024-07-26 10:24:40.517697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.517708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:123632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.517717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.517728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:123640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.517736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.517746] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:66 nsid:1 lba:123648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.517755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.517765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:123656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.297 [2024-07-26 10:24:40.517774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.517784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:123664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.297 [2024-07-26 10:24:40.517792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.517803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:123672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.517811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.517822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:123680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.517830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.517840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.297 [2024-07-26 10:24:40.517849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.517859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:123696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.517867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.517877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:123704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.297 [2024-07-26 10:24:40.517886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.517896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:123712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.297 [2024-07-26 10:24:40.517904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.517914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:123720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.517922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.517932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 
lba:123728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.517957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.517967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:123736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.297 [2024-07-26 10:24:40.517975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.517986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:123744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.517994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.518019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:123752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.518027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.518692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:123080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.518714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.518727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.518735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.518746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:123104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.518754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.518765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:123144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.518774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.518784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:123152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.518793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.518803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.518812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.518822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:123168 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.518830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.518841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:123176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.518849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.518875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:123760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.297 [2024-07-26 10:24:40.518883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.518894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:123768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.297 [2024-07-26 10:24:40.518902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.518912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:123776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.518920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.518930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:123784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.518938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.518948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:123792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.518957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.518969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:123800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.518977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.518987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:123808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.297 [2024-07-26 10:24:40.518996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.519006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:123816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.297 [2024-07-26 10:24:40.519015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.297 [2024-07-26 10:24:40.519025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:123824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:27.297 [2024-07-26 10:24:40.519033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:123832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.298 [2024-07-26 10:24:40.519052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:123840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.298 [2024-07-26 10:24:40.519070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:123848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.298 [2024-07-26 10:24:40.519089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:123856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.298 [2024-07-26 10:24:40.519107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:123864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.298 [2024-07-26 10:24:40.519126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:123872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.298 [2024-07-26 10:24:40.519144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:123880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.298 [2024-07-26 10:24:40.519162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:123200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.298 [2024-07-26 10:24:40.519180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:123248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.298 [2024-07-26 10:24:40.519199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:123272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.298 [2024-07-26 
10:24:40.519227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:123280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.298 [2024-07-26 10:24:40.519246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:123288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.298 [2024-07-26 10:24:40.519265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:123312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.298 [2024-07-26 10:24:40.519283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:123336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.298 [2024-07-26 10:24:40.519301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:123352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.298 [2024-07-26 10:24:40.519321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:123888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.298 [2024-07-26 10:24:40.519340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:123896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.298 [2024-07-26 10:24:40.519358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:123904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.298 [2024-07-26 10:24:40.519376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:123912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.298 [2024-07-26 10:24:40.519394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:123920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.298 [2024-07-26 10:24:40.519412] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:123928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.298 [2024-07-26 10:24:40.519431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:123936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.298 [2024-07-26 10:24:40.519450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:123944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.298 [2024-07-26 10:24:40.519469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.298 [2024-07-26 10:24:40.519488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:123960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.298 [2024-07-26 10:24:40.519507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:123968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.298 [2024-07-26 10:24:40.519530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:123976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.298 [2024-07-26 10:24:40.519548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:123984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.298 [2024-07-26 10:24:40.519567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:123992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.298 [2024-07-26 10:24:40.519603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:124000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.298 [2024-07-26 10:24:40.519643] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:124008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.298 [2024-07-26 10:24:40.519671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:124016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.298 [2024-07-26 10:24:40.519692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.298 [2024-07-26 10:24:40.519713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:124032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.298 [2024-07-26 10:24:40.519734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:124040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.298 [2024-07-26 10:24:40.519754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:124048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.298 [2024-07-26 10:24:40.519774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:123360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.298 [2024-07-26 10:24:40.519794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:123368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.298 [2024-07-26 10:24:40.519814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:123376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.298 [2024-07-26 10:24:40.519835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.298 [2024-07-26 10:24:40.519846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:123384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.299 [2024-07-26 10:24:40.519855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.299 [2024-07-26 10:24:40.519866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.299 [2024-07-26 10:24:40.519875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.299 [2024-07-26 10:24:40.519886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:123400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.299 [2024-07-26 10:24:40.519899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.299 [2024-07-26 10:24:40.519911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:123408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.299 [2024-07-26 10:24:40.519919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.299 [2024-07-26 10:24:40.519931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:123424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.299 [2024-07-26 10:24:40.519955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.299 [2024-07-26 10:24:40.519981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.299 [2024-07-26 10:24:40.519990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.299 [2024-07-26 10:24:40.520000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:124064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.299 [2024-07-26 10:24:40.520008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.299 [2024-07-26 10:24:40.520019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:124072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.299 [2024-07-26 10:24:40.520032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.299 [2024-07-26 10:24:40.520042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:124080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.299 [2024-07-26 10:24:40.520051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.299 [2024-07-26 10:24:40.520061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.299 [2024-07-26 10:24:40.520069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.299 [2024-07-26 10:24:40.520079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.299 [2024-07-26 10:24:40.520087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:27.299 [2024-07-26 10:24:40.520097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:124104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.299 [2024-07-26 10:24:40.520105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.299 [2024-07-26 10:24:40.520115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:124112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.299 [2024-07-26 10:24:40.520123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.299 [2024-07-26 10:24:40.520133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:124120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.299 [2024-07-26 10:24:40.520142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.299 [2024-07-26 10:24:40.520152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:124128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.299 [2024-07-26 10:24:40.520160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.299 [2024-07-26 10:24:40.520169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:124136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.299 [2024-07-26 10:24:40.520178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.299 [2024-07-26 10:24:40.520188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:123440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.299 [2024-07-26 10:24:40.520196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.299 [2024-07-26 10:24:40.520206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:123456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.299 [2024-07-26 10:24:40.520214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.299 [2024-07-26 10:24:40.520225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:123464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.299 [2024-07-26 10:24:40.520237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.299 [2024-07-26 10:24:40.520248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:123488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.299 [2024-07-26 10:24:40.520256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.299 [2024-07-26 10:24:40.520267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:123496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.299 [2024-07-26 10:24:40.520275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:27.299 [2024-07-26 10:24:40.520286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:123504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.299 [2024-07-26 10:24:40.520295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.299 [2024-07-26 10:24:40.520304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:123520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.299 [2024-07-26 10:24:40.520312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.299 [2024-07-26 10:24:40.520322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8fcf0 is same with the state(5) to be set 00:19:27.299 [2024-07-26 10:24:40.520338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.299 [2024-07-26 10:24:40.520346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.299 [2024-07-26 10:24:40.520354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123528 len:8 PRP1 0x0 PRP2 0x0 00:19:27.299 [2024-07-26 10:24:40.520363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.299 [2024-07-26 10:24:40.520414] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe8fcf0 was disconnected and freed. reset controller. 00:19:27.299 [2024-07-26 10:24:40.520629] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:27.299 [2024-07-26 10:24:40.520702] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe86a20 (9): Bad file descriptor 00:19:27.299 [2024-07-26 10:24:40.520811] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:27.299 [2024-07-26 10:24:40.520859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:27.299 [2024-07-26 10:24:40.520896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:27.299 [2024-07-26 10:24:40.520910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe86a20 with addr=10.0.0.2, port=4420 00:19:27.299 [2024-07-26 10:24:40.520920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe86a20 is same with the state(5) to be set 00:19:27.299 [2024-07-26 10:24:40.520938] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe86a20 (9): Bad file descriptor 00:19:27.299 [2024-07-26 10:24:40.520953] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:27.299 [2024-07-26 10:24:40.520961] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:27.299 [2024-07-26 10:24:40.520971] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:27.299 [2024-07-26 10:24:40.520990] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
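The connect() failures just above with errno = 111 are ECONNREFUSED: the listener on 10.0.0.2:4420 was removed by the rpc.py call at host/timeout.sh@99 at the top of this excerpt, so every reconnect attempt from the host is rejected until it is added back. A minimal bash sketch of how that state could be checked by hand; nvmf_subsystem_get_listeners is assumed to be available in this SPDK revision, and the raw /dev/tcp probe is only an illustration, not part of the test:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # the listener list for the subsystem should now be empty
  $rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
  # a plain TCP connect to the removed listener should be refused (errno 111)
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "port 4420 is still accepting connections"
  else
    echo "connection refused, matching the errno = 111 entries above"
  fi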
00:19:27.299 [2024-07-26 10:24:40.521000] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:27.299 10:24:40 -- host/timeout.sh@101 -- # sleep 3 00:19:28.234 [2024-07-26 10:24:41.521121] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:28.234 [2024-07-26 10:24:41.521221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:28.234 [2024-07-26 10:24:41.521262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:28.234 [2024-07-26 10:24:41.521278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe86a20 with addr=10.0.0.2, port=4420 00:19:28.234 [2024-07-26 10:24:41.521291] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe86a20 is same with the state(5) to be set 00:19:28.234 [2024-07-26 10:24:41.521316] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe86a20 (9): Bad file descriptor 00:19:28.234 [2024-07-26 10:24:41.521334] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:28.234 [2024-07-26 10:24:41.521343] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:28.234 [2024-07-26 10:24:41.521354] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:28.234 [2024-07-26 10:24:41.521381] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:28.234 [2024-07-26 10:24:41.521392] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:29.169 [2024-07-26 10:24:42.521518] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:29.169 [2024-07-26 10:24:42.521659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:29.169 [2024-07-26 10:24:42.521705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:29.169 [2024-07-26 10:24:42.521722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe86a20 with addr=10.0.0.2, port=4420 00:19:29.169 [2024-07-26 10:24:42.521736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe86a20 is same with the state(5) to be set 00:19:29.169 [2024-07-26 10:24:42.521761] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe86a20 (9): Bad file descriptor 00:19:29.169 [2024-07-26 10:24:42.521797] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:29.169 [2024-07-26 10:24:42.521809] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:29.169 [2024-07-26 10:24:42.521821] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:29.169 [2024-07-26 10:24:42.521848] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
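The failed-reset pattern above repeats roughly once per second until the listener returns; how often the host retries and how long it tolerates a lost controller are governed by the bdev_nvme reconnect options chosen when the controller is attached. As a hedged illustration (the same flags appear verbatim at host/timeout.sh@120 later in this log), a controller can be attached through the bdevperf RPC socket with an explicit reconnect delay and controller-loss timeout:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # retry the reconnect every 2 s; give the controller up after 5 s without it
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2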
00:19:29.169 [2024-07-26 10:24:42.521860] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:30.103 [2024-07-26 10:24:43.524104] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.103 [2024-07-26 10:24:43.524200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.103 [2024-07-26 10:24:43.524240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.103 [2024-07-26 10:24:43.524256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe86a20 with addr=10.0.0.2, port=4420 00:19:30.103 [2024-07-26 10:24:43.524269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe86a20 is same with the state(5) to be set 00:19:30.103 [2024-07-26 10:24:43.524425] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe86a20 (9): Bad file descriptor 00:19:30.103 [2024-07-26 10:24:43.524574] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:30.103 [2024-07-26 10:24:43.524602] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:30.103 [2024-07-26 10:24:43.524630] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:30.103 [2024-07-26 10:24:43.527355] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:30.103 [2024-07-26 10:24:43.527400] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:30.103 10:24:43 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:30.361 [2024-07-26 10:24:43.772720] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:30.361 10:24:43 -- host/timeout.sh@103 -- # wait 85427 00:19:31.295 [2024-07-26 10:24:44.558726] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
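Taken together, the entries from host/timeout.sh@99 through @103 show the listener being bounced while I/O is in flight: the listen address is removed, the host fails to reconnect for a few seconds, the address is re-added, and the next controller reset succeeds. A hedged bash sketch of that bounce, using only the rpc.py calls and addresses that appear in the log above (not a verbatim copy of the test script):
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420  # host I/O aborts, reconnects start failing
  sleep 3                                                                # long enough for several failed attempts
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420     # next reset attempt reconnects successfully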
00:19:36.564 00:19:36.564 Latency(us) 00:19:36.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.564 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:36.564 Verification LBA range: start 0x0 length 0x4000 00:19:36.564 NVMe0n1 : 10.01 8365.08 32.68 6313.16 0.00 8706.06 424.49 3019898.88 00:19:36.564 =================================================================================================================== 00:19:36.564 Total : 8365.08 32.68 6313.16 0.00 8706.06 0.00 3019898.88 00:19:36.564 0 00:19:36.564 10:24:49 -- host/timeout.sh@105 -- # killprocess 85299 00:19:36.564 10:24:49 -- common/autotest_common.sh@926 -- # '[' -z 85299 ']' 00:19:36.564 10:24:49 -- common/autotest_common.sh@930 -- # kill -0 85299 00:19:36.564 10:24:49 -- common/autotest_common.sh@931 -- # uname 00:19:36.564 10:24:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:36.564 10:24:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85299 00:19:36.564 killing process with pid 85299 00:19:36.564 Received shutdown signal, test time was about 10.000000 seconds 00:19:36.564 00:19:36.564 Latency(us) 00:19:36.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.564 =================================================================================================================== 00:19:36.564 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:36.564 10:24:49 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:36.564 10:24:49 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:36.564 10:24:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85299' 00:19:36.564 10:24:49 -- common/autotest_common.sh@945 -- # kill 85299 00:19:36.564 10:24:49 -- common/autotest_common.sh@950 -- # wait 85299 00:19:36.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:36.564 10:24:49 -- host/timeout.sh@110 -- # bdevperf_pid=85547 00:19:36.564 10:24:49 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:19:36.564 10:24:49 -- host/timeout.sh@112 -- # waitforlisten 85547 /var/tmp/bdevperf.sock 00:19:36.564 10:24:49 -- common/autotest_common.sh@819 -- # '[' -z 85547 ']' 00:19:36.564 10:24:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.564 10:24:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:36.564 10:24:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:36.564 10:24:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:36.564 10:24:49 -- common/autotest_common.sh@10 -- # set +x 00:19:36.564 [2024-07-26 10:24:49.703349] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
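The block above tears down the previous bdevperf instance (killprocess 85299) and starts a fresh one in wait-for-RPC mode (-z) on its own Unix socket, so the controller can be attached and the workload driven over RPC. A hedged sketch of that launch-and-drive pattern, with the paths and flags taken from this log; the socket-polling loop is only a rough stand-in for the test's waitforlisten helper:
  spdk=/home/vagrant/spdk_repo/spdk
  $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w randread -t 10 -f &
  bdevperf_pid=$!
  # wait until the RPC socket exists before sending configuration RPCs
  until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done
  # bdev_nvme_set_options / bdev_nvme_attach_controller are issued here (see @118 and @120 below)
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests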
00:19:36.564 [2024-07-26 10:24:49.703448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85547 ] 00:19:36.564 [2024-07-26 10:24:49.840589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.564 [2024-07-26 10:24:49.926907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.500 10:24:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:37.500 10:24:50 -- common/autotest_common.sh@852 -- # return 0 00:19:37.500 10:24:50 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 85547 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:37.500 10:24:50 -- host/timeout.sh@116 -- # dtrace_pid=85557 00:19:37.500 10:24:50 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:37.500 10:24:50 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:37.758 NVMe0n1 00:19:37.758 10:24:51 -- host/timeout.sh@124 -- # rpc_pid=85599 00:19:37.758 10:24:51 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:37.758 10:24:51 -- host/timeout.sh@125 -- # sleep 1 00:19:38.017 Running I/O for 10 seconds... 00:19:38.956 10:24:52 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:38.956 [2024-07-26 10:24:52.397990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:26552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.956 [2024-07-26 10:24:52.398048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.956 [2024-07-26 10:24:52.398089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.956 [2024-07-26 10:24:52.398100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.956 [2024-07-26 10:24:52.398111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.956 [2024-07-26 10:24:52.398120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.956 [2024-07-26 10:24:52.398130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.956 [2024-07-26 10:24:52.398140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.956 [2024-07-26 10:24:52.398151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:101312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.956 [2024-07-26 10:24:52.398159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:38.956 [... the same pair of nvme_qpair messages repeats for every remaining outstanding request on qid:1: a READ command print for cid 121 down through cid 1 (only the lba values differ), each followed by an ABORTED - SQ DELETION (00/08) completion ...] 00:19:38.958 [2024-07-26 10:24:52.401919] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.958 [2024-07-26 10:24:52.401928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.958 [2024-07-26 10:24:52.401940] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a5040 is same with the state(5) to be set 00:19:38.958 [2024-07-26 10:24:52.401953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:38.958 [2024-07-26 10:24:52.401966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:38.958 [2024-07-26 10:24:52.401975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72912 len:8 PRP1 0x0 PRP2 0x0 00:19:38.958 [2024-07-26 10:24:52.401984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.958 [2024-07-26 10:24:52.402039] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x5a5040 was disconnected and freed. reset controller. 00:19:38.958 [2024-07-26 10:24:52.402324] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:38.958 [2024-07-26 10:24:52.402406] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x576a40 (9): Bad file descriptor 00:19:38.958 [2024-07-26 10:24:52.402511] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:38.958 [2024-07-26 10:24:52.402965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:38.958 [2024-07-26 10:24:52.403222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:38.958 [2024-07-26 10:24:52.403441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x576a40 with addr=10.0.0.2, port=4420 00:19:38.958 [2024-07-26 10:24:52.403852] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x576a40 is same with the state(5) to be set 00:19:38.958 [2024-07-26 10:24:52.404310] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x576a40 (9): Bad file descriptor 00:19:38.958 [2024-07-26 10:24:52.404835] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:38.958 [2024-07-26 10:24:52.405160] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:38.958 [2024-07-26 10:24:52.405547] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:38.958 [2024-07-26 10:24:52.405775] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
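A note on what drives the retry loop above and below: the controller was attached at 10:24:50 with --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5, so once the listener is removed bdev_nvme, as the option names suggest, retries the TCP connect roughly every two seconds (the connect() failed, errno = 111 lines) until the loss timeout expires and the reset is abandoned. A condensed sketch of the same setup, reusing the rpc.py calls copied from this run (script paths shortened; the comments are interpretive summaries, not SPDK documentation):

  # bdevperf side: tune reconnect options, then attach the controller
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # target side: drop the listener so every reconnect attempt fails
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420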
00:19:38.958 [2024-07-26 10:24:52.406030] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:39.217 10:24:52 -- host/timeout.sh@128 -- # wait 85599 00:19:41.133 [2024-07-26 10:24:54.406259] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:41.133 [2024-07-26 10:24:54.406846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:41.133 [2024-07-26 10:24:54.407151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:41.133 [2024-07-26 10:24:54.407381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x576a40 with addr=10.0.0.2, port=4420 00:19:41.133 [2024-07-26 10:24:54.407824] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x576a40 is same with the state(5) to be set 00:19:41.133 [2024-07-26 10:24:54.408243] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x576a40 (9): Bad file descriptor 00:19:41.133 [2024-07-26 10:24:54.408685] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:41.133 [2024-07-26 10:24:54.409079] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:41.133 [2024-07-26 10:24:54.409502] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:41.133 [2024-07-26 10:24:54.409731] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:41.133 [2024-07-26 10:24:54.409958] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:43.090 [2024-07-26 10:24:56.410644] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:43.090 [2024-07-26 10:24:56.411224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:43.090 [2024-07-26 10:24:56.411538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:43.091 [2024-07-26 10:24:56.411564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x576a40 with addr=10.0.0.2, port=4420 00:19:43.091 [2024-07-26 10:24:56.411626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x576a40 is same with the state(5) to be set 00:19:43.091 [2024-07-26 10:24:56.411663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x576a40 (9): Bad file descriptor 00:19:43.091 [2024-07-26 10:24:56.411683] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:43.091 [2024-07-26 10:24:56.411693] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:43.091 [2024-07-26 10:24:56.411704] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:43.091 [2024-07-26 10:24:56.411733] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:43.091 [2024-07-26 10:24:56.411745] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:44.991 [2024-07-26 10:24:58.411820] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
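The pass/fail check that follows relies on the bpftrace probe attached earlier (scripts/bpftrace.sh 85547 scripts/bpf/nvmf_timeout.bt), which writes one "reconnect delay bdev controller NVMe0" line to trace.txt for each delayed reconnect; this run produced three such lines, roughly two seconds apart, and the test requires the count to exceed two. A minimal standalone re-check of a saved trace would look like the following sketch (path shortened from the trace.txt used above):

  count=$(grep -c 'reconnect delay bdev controller NVMe0' trace.txt)
  if (( count <= 2 )); then
      echo "expected more than 2 reconnect delays, got $count" >&2
      exit 1
  fi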
00:19:44.991 [2024-07-26 10:24:58.411885] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:44.991 [2024-07-26 10:24:58.411912] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:44.991 [2024-07-26 10:24:58.411923] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:44.991 [2024-07-26 10:24:58.411952] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:46.362 00:19:46.362 Latency(us) 00:19:46.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.362 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:46.362 NVMe0n1 : 8.12 2204.58 8.61 15.76 0.00 57592.91 7298.33 7015926.69 00:19:46.362 =================================================================================================================== 00:19:46.362 Total : 2204.58 8.61 15.76 0.00 57592.91 7298.33 7015926.69 00:19:46.362 0 00:19:46.362 10:24:59 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:46.362 Attaching 5 probes... 00:19:46.362 1216.066142: reset bdev controller NVMe0 00:19:46.362 1216.200854: reconnect bdev controller NVMe0 00:19:46.362 3219.840662: reconnect delay bdev controller NVMe0 00:19:46.362 3219.874751: reconnect bdev controller NVMe0 00:19:46.362 5224.215561: reconnect delay bdev controller NVMe0 00:19:46.362 5224.254034: reconnect bdev controller NVMe0 00:19:46.362 7225.542712: reconnect delay bdev controller NVMe0 00:19:46.362 7225.564959: reconnect bdev controller NVMe0 00:19:46.362 10:24:59 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:46.362 10:24:59 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:46.362 10:24:59 -- host/timeout.sh@136 -- # kill 85557 00:19:46.362 10:24:59 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:46.362 10:24:59 -- host/timeout.sh@139 -- # killprocess 85547 00:19:46.362 10:24:59 -- common/autotest_common.sh@926 -- # '[' -z 85547 ']' 00:19:46.362 10:24:59 -- common/autotest_common.sh@930 -- # kill -0 85547 00:19:46.362 10:24:59 -- common/autotest_common.sh@931 -- # uname 00:19:46.362 10:24:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:46.362 10:24:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85547 00:19:46.362 10:24:59 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:46.362 10:24:59 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:46.362 killing process with pid 85547 00:19:46.363 Received shutdown signal, test time was about 8.184808 seconds 00:19:46.363 00:19:46.363 Latency(us) 00:19:46.363 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.363 =================================================================================================================== 00:19:46.363 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:46.363 10:24:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85547' 00:19:46.363 10:24:59 -- common/autotest_common.sh@945 -- # kill 85547 00:19:46.363 10:24:59 -- common/autotest_common.sh@950 -- # wait 85547 00:19:46.363 10:24:59 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:46.620 10:24:59 -- host/timeout.sh@143 -- # trap - SIGINT 
SIGTERM EXIT 00:19:46.620 10:24:59 -- host/timeout.sh@145 -- # nvmftestfini 00:19:46.620 10:24:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:46.620 10:24:59 -- nvmf/common.sh@116 -- # sync 00:19:46.620 10:24:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:46.620 10:24:59 -- nvmf/common.sh@119 -- # set +e 00:19:46.620 10:24:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:46.620 10:24:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:46.620 rmmod nvme_tcp 00:19:46.620 rmmod nvme_fabrics 00:19:46.620 rmmod nvme_keyring 00:19:46.620 10:24:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:46.620 10:24:59 -- nvmf/common.sh@123 -- # set -e 00:19:46.620 10:24:59 -- nvmf/common.sh@124 -- # return 0 00:19:46.620 10:24:59 -- nvmf/common.sh@477 -- # '[' -n 85110 ']' 00:19:46.620 10:24:59 -- nvmf/common.sh@478 -- # killprocess 85110 00:19:46.620 10:24:59 -- common/autotest_common.sh@926 -- # '[' -z 85110 ']' 00:19:46.620 10:24:59 -- common/autotest_common.sh@930 -- # kill -0 85110 00:19:46.620 10:24:59 -- common/autotest_common.sh@931 -- # uname 00:19:46.620 10:24:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:46.620 10:24:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85110 00:19:46.620 10:25:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:46.620 killing process with pid 85110 00:19:46.620 10:25:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:46.620 10:25:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85110' 00:19:46.620 10:25:00 -- common/autotest_common.sh@945 -- # kill 85110 00:19:46.620 10:25:00 -- common/autotest_common.sh@950 -- # wait 85110 00:19:46.878 10:25:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:46.878 10:25:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:46.878 10:25:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:46.878 10:25:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:46.878 10:25:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:46.878 10:25:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.878 10:25:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:46.878 10:25:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.878 10:25:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:46.878 00:19:46.878 real 0m46.468s 00:19:46.878 user 2m15.950s 00:19:46.878 sys 0m5.631s 00:19:46.878 10:25:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:46.878 ************************************ 00:19:46.878 END TEST nvmf_timeout 00:19:46.878 ************************************ 00:19:46.878 10:25:00 -- common/autotest_common.sh@10 -- # set +x 00:19:46.878 10:25:00 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:19:46.878 10:25:00 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:19:46.878 10:25:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:46.878 10:25:00 -- common/autotest_common.sh@10 -- # set +x 00:19:47.136 10:25:00 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:19:47.136 ************************************ 00:19:47.136 END TEST nvmf_tcp 00:19:47.136 ************************************ 00:19:47.136 00:19:47.136 real 10m34.602s 00:19:47.136 user 29m37.511s 00:19:47.136 sys 3m21.035s 00:19:47.136 10:25:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:47.136 10:25:00 -- common/autotest_common.sh@10 -- # set +x 00:19:47.136 10:25:00 -- spdk/autotest.sh@296 -- # [[ 1 
-eq 0 ]] 00:19:47.136 10:25:00 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:47.136 10:25:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:47.136 10:25:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:47.136 10:25:00 -- common/autotest_common.sh@10 -- # set +x 00:19:47.136 ************************************ 00:19:47.136 START TEST nvmf_dif 00:19:47.136 ************************************ 00:19:47.136 10:25:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:47.136 * Looking for test storage... 00:19:47.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:47.136 10:25:00 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:47.136 10:25:00 -- nvmf/common.sh@7 -- # uname -s 00:19:47.137 10:25:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.137 10:25:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.137 10:25:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.137 10:25:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.137 10:25:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.137 10:25:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.137 10:25:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.137 10:25:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.137 10:25:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.137 10:25:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.137 10:25:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:19:47.137 10:25:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:19:47.137 10:25:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.137 10:25:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.137 10:25:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:47.137 10:25:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:47.137 10:25:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.137 10:25:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.137 10:25:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.137 10:25:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.137 10:25:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.137 10:25:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.137 10:25:00 -- paths/export.sh@5 -- # export PATH 00:19:47.137 10:25:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.137 10:25:00 -- nvmf/common.sh@46 -- # : 0 00:19:47.137 10:25:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:47.137 10:25:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:47.137 10:25:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:47.137 10:25:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.137 10:25:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.137 10:25:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:47.137 10:25:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:47.137 10:25:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:47.137 10:25:00 -- target/dif.sh@15 -- # NULL_META=16 00:19:47.137 10:25:00 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:47.137 10:25:00 -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:47.137 10:25:00 -- target/dif.sh@15 -- # NULL_DIF=1 00:19:47.137 10:25:00 -- target/dif.sh@135 -- # nvmftestinit 00:19:47.137 10:25:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:47.137 10:25:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.137 10:25:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:47.137 10:25:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:47.137 10:25:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:47.137 10:25:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.137 10:25:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:47.137 10:25:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.137 10:25:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:47.137 10:25:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:47.137 10:25:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:47.137 10:25:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:47.137 10:25:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:47.137 10:25:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:47.137 10:25:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.137 10:25:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.137 10:25:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:47.137 10:25:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:47.137 10:25:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:47.137 10:25:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:47.137 10:25:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:47.137 10:25:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.137 
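The nvmf_veth_init sequence below builds the virtual test network used for the dif run: a nvmf_tgt_ns_spdk namespace holding the target interfaces (10.0.0.2 and 10.0.0.3), an initiator interface left in the root namespace (10.0.0.1), and an nvmf_br bridge joining the veth peers, verified by the three pings. A condensed, hand-written sketch of the core steps (interface names and addresses taken from the log; the second target interface and the iptables rule are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.2   # root namespace to target namespace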
10:25:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:47.137 10:25:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:47.137 10:25:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:47.137 10:25:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:47.137 10:25:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:47.137 10:25:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:47.137 Cannot find device "nvmf_tgt_br" 00:19:47.137 10:25:00 -- nvmf/common.sh@154 -- # true 00:19:47.137 10:25:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:47.137 Cannot find device "nvmf_tgt_br2" 00:19:47.137 10:25:00 -- nvmf/common.sh@155 -- # true 00:19:47.137 10:25:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:47.137 10:25:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:47.137 Cannot find device "nvmf_tgt_br" 00:19:47.137 10:25:00 -- nvmf/common.sh@157 -- # true 00:19:47.137 10:25:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:47.396 Cannot find device "nvmf_tgt_br2" 00:19:47.396 10:25:00 -- nvmf/common.sh@158 -- # true 00:19:47.396 10:25:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:47.396 10:25:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:47.396 10:25:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:47.396 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.396 10:25:00 -- nvmf/common.sh@161 -- # true 00:19:47.396 10:25:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:47.396 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.396 10:25:00 -- nvmf/common.sh@162 -- # true 00:19:47.396 10:25:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:47.396 10:25:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:47.396 10:25:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:47.396 10:25:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:47.396 10:25:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:47.396 10:25:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:47.396 10:25:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:47.396 10:25:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:47.396 10:25:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:47.396 10:25:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:47.396 10:25:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:47.396 10:25:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:47.396 10:25:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:47.396 10:25:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:47.396 10:25:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:47.396 10:25:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:47.396 10:25:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:47.396 10:25:00 -- nvmf/common.sh@192 -- # ip link set 
nvmf_br up 00:19:47.396 10:25:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:47.396 10:25:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:47.396 10:25:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:47.396 10:25:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:47.396 10:25:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:47.396 10:25:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:47.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:47.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:19:47.396 00:19:47.396 --- 10.0.0.2 ping statistics --- 00:19:47.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.396 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:19:47.396 10:25:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:47.396 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:47.396 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:19:47.396 00:19:47.396 --- 10.0.0.3 ping statistics --- 00:19:47.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.396 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:47.396 10:25:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:47.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:47.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:19:47.655 00:19:47.655 --- 10.0.0.1 ping statistics --- 00:19:47.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.655 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:19:47.655 10:25:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:47.655 10:25:00 -- nvmf/common.sh@421 -- # return 0 00:19:47.655 10:25:00 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:19:47.655 10:25:00 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:47.913 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:47.913 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:47.913 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:47.913 10:25:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.913 10:25:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:47.913 10:25:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:47.913 10:25:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:47.913 10:25:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:47.913 10:25:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:47.914 10:25:01 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:47.914 10:25:01 -- target/dif.sh@137 -- # nvmfappstart 00:19:47.914 10:25:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:47.914 10:25:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:47.914 10:25:01 -- common/autotest_common.sh@10 -- # set +x 00:19:47.914 10:25:01 -- nvmf/common.sh@469 -- # nvmfpid=86049 00:19:47.914 10:25:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:47.914 10:25:01 -- nvmf/common.sh@470 -- # waitforlisten 86049 00:19:47.914 10:25:01 -- common/autotest_common.sh@819 -- # '[' -z 86049 ']' 00:19:47.914 10:25:01 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:19:47.914 10:25:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:47.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.914 10:25:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.914 10:25:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:47.914 10:25:01 -- common/autotest_common.sh@10 -- # set +x 00:19:47.914 [2024-07-26 10:25:01.334088] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:19:47.914 [2024-07-26 10:25:01.334184] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.172 [2024-07-26 10:25:01.471251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.172 [2024-07-26 10:25:01.561135] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:48.172 [2024-07-26 10:25:01.561311] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.172 [2024-07-26 10:25:01.561326] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.172 [2024-07-26 10:25:01.561337] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.172 [2024-07-26 10:25:01.561371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.107 10:25:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:49.107 10:25:02 -- common/autotest_common.sh@852 -- # return 0 00:19:49.107 10:25:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:49.107 10:25:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:49.107 10:25:02 -- common/autotest_common.sh@10 -- # set +x 00:19:49.107 10:25:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.107 10:25:02 -- target/dif.sh@139 -- # create_transport 00:19:49.107 10:25:02 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:49.107 10:25:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:49.107 10:25:02 -- common/autotest_common.sh@10 -- # set +x 00:19:49.107 [2024-07-26 10:25:02.385554] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.107 10:25:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:49.107 10:25:02 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:49.107 10:25:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:49.107 10:25:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:49.107 10:25:02 -- common/autotest_common.sh@10 -- # set +x 00:19:49.107 ************************************ 00:19:49.107 START TEST fio_dif_1_default 00:19:49.107 ************************************ 00:19:49.107 10:25:02 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:19:49.108 10:25:02 -- target/dif.sh@86 -- # create_subsystems 0 00:19:49.108 10:25:02 -- target/dif.sh@28 -- # local sub 00:19:49.108 10:25:02 -- target/dif.sh@30 -- # for sub in "$@" 00:19:49.108 10:25:02 -- target/dif.sh@31 -- # create_subsystem 0 00:19:49.108 10:25:02 -- target/dif.sh@18 -- # local sub_id=0 00:19:49.108 10:25:02 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 
64 512 --md-size 16 --dif-type 1 00:19:49.108 10:25:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:49.108 10:25:02 -- common/autotest_common.sh@10 -- # set +x 00:19:49.108 bdev_null0 00:19:49.108 10:25:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:49.108 10:25:02 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:49.108 10:25:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:49.108 10:25:02 -- common/autotest_common.sh@10 -- # set +x 00:19:49.108 10:25:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:49.108 10:25:02 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:49.108 10:25:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:49.108 10:25:02 -- common/autotest_common.sh@10 -- # set +x 00:19:49.108 10:25:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:49.108 10:25:02 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:49.108 10:25:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:49.108 10:25:02 -- common/autotest_common.sh@10 -- # set +x 00:19:49.108 [2024-07-26 10:25:02.433672] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.108 10:25:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:49.108 10:25:02 -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:49.108 10:25:02 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:49.108 10:25:02 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:49.108 10:25:02 -- nvmf/common.sh@520 -- # config=() 00:19:49.108 10:25:02 -- nvmf/common.sh@520 -- # local subsystem config 00:19:49.108 10:25:02 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:49.108 10:25:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:49.108 10:25:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:49.108 { 00:19:49.108 "params": { 00:19:49.108 "name": "Nvme$subsystem", 00:19:49.108 "trtype": "$TEST_TRANSPORT", 00:19:49.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.108 "adrfam": "ipv4", 00:19:49.108 "trsvcid": "$NVMF_PORT", 00:19:49.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.108 "hdgst": ${hdgst:-false}, 00:19:49.108 "ddgst": ${ddgst:-false} 00:19:49.108 }, 00:19:49.108 "method": "bdev_nvme_attach_controller" 00:19:49.108 } 00:19:49.108 EOF 00:19:49.108 )") 00:19:49.108 10:25:02 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:49.108 10:25:02 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:19:49.108 10:25:02 -- target/dif.sh@82 -- # gen_fio_conf 00:19:49.108 10:25:02 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:49.108 10:25:02 -- common/autotest_common.sh@1318 -- # local sanitizers 00:19:49.108 10:25:02 -- target/dif.sh@54 -- # local file 00:19:49.108 10:25:02 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:49.108 10:25:02 -- common/autotest_common.sh@1320 -- # shift 00:19:49.108 10:25:02 -- target/dif.sh@56 -- # cat 00:19:49.108 10:25:02 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:19:49.108 10:25:02 -- common/autotest_common.sh@1323 -- # for 
sanitizer in "${sanitizers[@]}" 00:19:49.108 10:25:02 -- nvmf/common.sh@542 -- # cat 00:19:49.108 10:25:02 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:49.108 10:25:02 -- common/autotest_common.sh@1324 -- # grep libasan 00:19:49.108 10:25:02 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:49.108 10:25:02 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:49.108 10:25:02 -- target/dif.sh@72 -- # (( file <= files )) 00:19:49.108 10:25:02 -- nvmf/common.sh@544 -- # jq . 00:19:49.108 10:25:02 -- nvmf/common.sh@545 -- # IFS=, 00:19:49.108 10:25:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:49.108 "params": { 00:19:49.108 "name": "Nvme0", 00:19:49.108 "trtype": "tcp", 00:19:49.108 "traddr": "10.0.0.2", 00:19:49.108 "adrfam": "ipv4", 00:19:49.108 "trsvcid": "4420", 00:19:49.108 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:49.108 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:49.108 "hdgst": false, 00:19:49.108 "ddgst": false 00:19:49.108 }, 00:19:49.108 "method": "bdev_nvme_attach_controller" 00:19:49.108 }' 00:19:49.108 10:25:02 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:49.108 10:25:02 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:49.108 10:25:02 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:49.108 10:25:02 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:49.108 10:25:02 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:19:49.108 10:25:02 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:49.108 10:25:02 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:49.108 10:25:02 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:49.108 10:25:02 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:49.108 10:25:02 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:49.366 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:49.366 fio-3.35 00:19:49.366 Starting 1 thread 00:19:49.624 [2024-07-26 10:25:03.015313] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
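For orientation, the --spdk_json_conf input assembled above is the printed bdev_nvme_attach_controller fragment wrapped by gen_nvmf_target_json into a standard SPDK bdev-subsystem config. A minimal hand-written equivalent of this single-subsystem run is sketched below; the /tmp path and the explicit fio options are illustrative (inferred from the filename0 job line), and the wrapper layout is assumed from SPDK's JSON config format rather than shown verbatim in the trace.

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Preload the SPDK fio bdev plugin and drive roughly the same 4k randread,
# queue-depth-4 workload as the filename0 job above (job options inferred).
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme0.json \
  --name=filename0 --filename=Nvme0n1 --thread=1 \
  --rw=randread --bs=4k --iodepth=4 --time_based=1 --runtime=10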
00:19:49.624 [2024-07-26 10:25:03.015400] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:01.873 00:20:01.873 filename0: (groupid=0, jobs=1): err= 0: pid=86116: Fri Jul 26 10:25:13 2024 00:20:01.873 read: IOPS=9509, BW=37.1MiB/s (38.9MB/s)(371MiB/10001msec) 00:20:01.873 slat (nsec): min=5814, max=94814, avg=8117.17, stdev=3847.48 00:20:01.873 clat (usec): min=313, max=4238, avg=396.63, stdev=47.37 00:20:01.873 lat (usec): min=319, max=4306, avg=404.75, stdev=48.28 00:20:01.873 clat percentiles (usec): 00:20:01.873 | 1.00th=[ 330], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[ 363], 00:20:01.873 | 30.00th=[ 371], 40.00th=[ 383], 50.00th=[ 392], 60.00th=[ 400], 00:20:01.873 | 70.00th=[ 412], 80.00th=[ 429], 90.00th=[ 449], 95.00th=[ 469], 00:20:01.873 | 99.00th=[ 510], 99.50th=[ 529], 99.90th=[ 570], 99.95th=[ 603], 00:20:01.873 | 99.99th=[ 1369] 00:20:01.873 bw ( KiB/s): min=34848, max=39520, per=100.00%, avg=38061.47, stdev=1179.59, samples=19 00:20:01.873 iops : min= 8712, max= 9880, avg=9515.37, stdev=294.90, samples=19 00:20:01.873 lat (usec) : 500=98.46%, 750=1.52%, 1000=0.01% 00:20:01.873 lat (msec) : 2=0.01%, 10=0.01% 00:20:01.873 cpu : usr=84.82%, sys=12.89%, ctx=13, majf=0, minf=8 00:20:01.873 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:01.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.873 issued rwts: total=95100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:01.873 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:01.873 00:20:01.873 Run status group 0 (all jobs): 00:20:01.873 READ: bw=37.1MiB/s (38.9MB/s), 37.1MiB/s-37.1MiB/s (38.9MB/s-38.9MB/s), io=371MiB (390MB), run=10001-10001msec 00:20:01.873 10:25:13 -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:01.873 10:25:13 -- target/dif.sh@43 -- # local sub 00:20:01.873 10:25:13 -- target/dif.sh@45 -- # for sub in "$@" 00:20:01.873 10:25:13 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:01.873 10:25:13 -- target/dif.sh@36 -- # local sub_id=0 00:20:01.873 10:25:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:01.873 10:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.873 10:25:13 -- common/autotest_common.sh@10 -- # set +x 00:20:01.873 10:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.873 10:25:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:01.873 10:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.873 10:25:13 -- common/autotest_common.sh@10 -- # set +x 00:20:01.873 10:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.873 00:20:01.873 real 0m10.950s 00:20:01.873 user 0m9.073s 00:20:01.873 sys 0m1.568s 00:20:01.873 10:25:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:01.873 10:25:13 -- common/autotest_common.sh@10 -- # set +x 00:20:01.873 ************************************ 00:20:01.873 END TEST fio_dif_1_default 00:20:01.873 ************************************ 00:20:01.873 10:25:13 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:01.873 10:25:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:01.873 10:25:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:01.873 10:25:13 -- common/autotest_common.sh@10 -- # set +x 00:20:01.873 ************************************ 00:20:01.873 START TEST 
fio_dif_1_multi_subsystems 00:20:01.873 ************************************ 00:20:01.873 10:25:13 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:20:01.873 10:25:13 -- target/dif.sh@92 -- # local files=1 00:20:01.873 10:25:13 -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:01.873 10:25:13 -- target/dif.sh@28 -- # local sub 00:20:01.873 10:25:13 -- target/dif.sh@30 -- # for sub in "$@" 00:20:01.873 10:25:13 -- target/dif.sh@31 -- # create_subsystem 0 00:20:01.873 10:25:13 -- target/dif.sh@18 -- # local sub_id=0 00:20:01.873 10:25:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:01.873 10:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.873 10:25:13 -- common/autotest_common.sh@10 -- # set +x 00:20:01.873 bdev_null0 00:20:01.873 10:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.873 10:25:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:01.873 10:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.873 10:25:13 -- common/autotest_common.sh@10 -- # set +x 00:20:01.873 10:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.873 10:25:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:01.873 10:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.873 10:25:13 -- common/autotest_common.sh@10 -- # set +x 00:20:01.873 10:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.873 10:25:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:01.873 10:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.873 10:25:13 -- common/autotest_common.sh@10 -- # set +x 00:20:01.873 [2024-07-26 10:25:13.435298] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.873 10:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.873 10:25:13 -- target/dif.sh@30 -- # for sub in "$@" 00:20:01.873 10:25:13 -- target/dif.sh@31 -- # create_subsystem 1 00:20:01.873 10:25:13 -- target/dif.sh@18 -- # local sub_id=1 00:20:01.873 10:25:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:01.873 10:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.873 10:25:13 -- common/autotest_common.sh@10 -- # set +x 00:20:01.873 bdev_null1 00:20:01.873 10:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.873 10:25:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:01.873 10:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.873 10:25:13 -- common/autotest_common.sh@10 -- # set +x 00:20:01.873 10:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.873 10:25:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:01.873 10:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.873 10:25:13 -- common/autotest_common.sh@10 -- # set +x 00:20:01.873 10:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.873 10:25:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:01.873 10:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.873 10:25:13 -- 
common/autotest_common.sh@10 -- # set +x 00:20:01.873 10:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.873 10:25:13 -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:01.873 10:25:13 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:01.873 10:25:13 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:01.873 10:25:13 -- nvmf/common.sh@520 -- # config=() 00:20:01.873 10:25:13 -- nvmf/common.sh@520 -- # local subsystem config 00:20:01.873 10:25:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:01.873 10:25:13 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:01.873 10:25:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:01.873 { 00:20:01.873 "params": { 00:20:01.873 "name": "Nvme$subsystem", 00:20:01.873 "trtype": "$TEST_TRANSPORT", 00:20:01.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.873 "adrfam": "ipv4", 00:20:01.873 "trsvcid": "$NVMF_PORT", 00:20:01.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.873 "hdgst": ${hdgst:-false}, 00:20:01.873 "ddgst": ${ddgst:-false} 00:20:01.873 }, 00:20:01.873 "method": "bdev_nvme_attach_controller" 00:20:01.873 } 00:20:01.873 EOF 00:20:01.873 )") 00:20:01.873 10:25:13 -- target/dif.sh@82 -- # gen_fio_conf 00:20:01.874 10:25:13 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:01.874 10:25:13 -- target/dif.sh@54 -- # local file 00:20:01.874 10:25:13 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:01.874 10:25:13 -- target/dif.sh@56 -- # cat 00:20:01.874 10:25:13 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:01.874 10:25:13 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:01.874 10:25:13 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.874 10:25:13 -- common/autotest_common.sh@1320 -- # shift 00:20:01.874 10:25:13 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:01.874 10:25:13 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:01.874 10:25:13 -- nvmf/common.sh@542 -- # cat 00:20:01.874 10:25:13 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:01.874 10:25:13 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.874 10:25:13 -- target/dif.sh@72 -- # (( file <= files )) 00:20:01.874 10:25:13 -- target/dif.sh@73 -- # cat 00:20:01.874 10:25:13 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:01.874 10:25:13 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:01.874 10:25:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:01.874 10:25:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:01.874 { 00:20:01.874 "params": { 00:20:01.874 "name": "Nvme$subsystem", 00:20:01.874 "trtype": "$TEST_TRANSPORT", 00:20:01.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.874 "adrfam": "ipv4", 00:20:01.874 "trsvcid": "$NVMF_PORT", 00:20:01.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.874 "hdgst": ${hdgst:-false}, 00:20:01.874 "ddgst": ${ddgst:-false} 00:20:01.874 }, 00:20:01.874 "method": "bdev_nvme_attach_controller" 00:20:01.874 } 00:20:01.874 EOF 00:20:01.874 )") 00:20:01.874 10:25:13 -- nvmf/common.sh@542 -- # cat 00:20:01.874 10:25:13 -- target/dif.sh@72 
-- # (( file++ )) 00:20:01.874 10:25:13 -- target/dif.sh@72 -- # (( file <= files )) 00:20:01.874 10:25:13 -- nvmf/common.sh@544 -- # jq . 00:20:01.874 10:25:13 -- nvmf/common.sh@545 -- # IFS=, 00:20:01.874 10:25:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:01.874 "params": { 00:20:01.874 "name": "Nvme0", 00:20:01.874 "trtype": "tcp", 00:20:01.874 "traddr": "10.0.0.2", 00:20:01.874 "adrfam": "ipv4", 00:20:01.874 "trsvcid": "4420", 00:20:01.874 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:01.874 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:01.874 "hdgst": false, 00:20:01.874 "ddgst": false 00:20:01.874 }, 00:20:01.874 "method": "bdev_nvme_attach_controller" 00:20:01.874 },{ 00:20:01.874 "params": { 00:20:01.874 "name": "Nvme1", 00:20:01.874 "trtype": "tcp", 00:20:01.874 "traddr": "10.0.0.2", 00:20:01.874 "adrfam": "ipv4", 00:20:01.874 "trsvcid": "4420", 00:20:01.874 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.874 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:01.874 "hdgst": false, 00:20:01.874 "ddgst": false 00:20:01.874 }, 00:20:01.874 "method": "bdev_nvme_attach_controller" 00:20:01.874 }' 00:20:01.874 10:25:13 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:01.874 10:25:13 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:01.874 10:25:13 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:01.874 10:25:13 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.874 10:25:13 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:01.874 10:25:13 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:01.874 10:25:13 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:01.874 10:25:13 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:01.874 10:25:13 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:01.874 10:25:13 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:01.874 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:01.874 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:01.874 fio-3.35 00:20:01.874 Starting 2 threads 00:20:01.874 [2024-07-26 10:25:14.128904] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
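The multi-subsystem pass differs from the single-subsystem one only in fan-out: the JSON printed above now carries two bdev_nvme_attach_controller entries (Nvme0 against cnode0 and Nvme1 against cnode1, both on 10.0.0.2:4420), and gen_fio_conf emits one fio job per attached namespace. A hand-written job file in the same shape as the filename0/filename1 jobs is sketched below; the file names are illustrative and the global options are inferred from the job lines, not copied from the helper.

cat > /tmp/dif_multi.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4k
iodepth=4
time_based=1
runtime=10

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF
# /tmp/nvme01.json stands in for the two-controller JSON config generated above.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
  --spdk_json_conf=/tmp/nvme01.json /tmp/dif_multi.fio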
00:20:01.874 [2024-07-26 10:25:14.128988] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:11.874 00:20:11.874 filename0: (groupid=0, jobs=1): err= 0: pid=86279: Fri Jul 26 10:25:24 2024 00:20:11.874 read: IOPS=5220, BW=20.4MiB/s (21.4MB/s)(204MiB/10001msec) 00:20:11.874 slat (nsec): min=6401, max=88154, avg=12714.44, stdev=4739.93 00:20:11.874 clat (usec): min=429, max=2303, avg=731.50, stdev=55.49 00:20:11.874 lat (usec): min=436, max=2314, avg=744.22, stdev=56.47 00:20:11.874 clat percentiles (usec): 00:20:11.874 | 1.00th=[ 619], 5.00th=[ 652], 10.00th=[ 668], 20.00th=[ 685], 00:20:11.874 | 30.00th=[ 701], 40.00th=[ 717], 50.00th=[ 725], 60.00th=[ 742], 00:20:11.874 | 70.00th=[ 758], 80.00th=[ 775], 90.00th=[ 799], 95.00th=[ 824], 00:20:11.874 | 99.00th=[ 865], 99.50th=[ 889], 99.90th=[ 930], 99.95th=[ 947], 00:20:11.874 | 99.99th=[ 1172] 00:20:11.874 bw ( KiB/s): min=20480, max=21696, per=50.09%, avg=20919.58, stdev=376.04, samples=19 00:20:11.874 iops : min= 5120, max= 5424, avg=5229.89, stdev=94.01, samples=19 00:20:11.874 lat (usec) : 500=0.02%, 750=65.66%, 1000=34.30% 00:20:11.874 lat (msec) : 2=0.02%, 4=0.01% 00:20:11.874 cpu : usr=89.11%, sys=9.07%, ctx=18, majf=0, minf=0 00:20:11.874 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:11.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.874 issued rwts: total=52212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.874 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:11.874 filename1: (groupid=0, jobs=1): err= 0: pid=86280: Fri Jul 26 10:25:24 2024 00:20:11.874 read: IOPS=5219, BW=20.4MiB/s (21.4MB/s)(204MiB/10001msec) 00:20:11.874 slat (usec): min=6, max=304, avg=12.88, stdev= 5.43 00:20:11.874 clat (usec): min=499, max=2301, avg=730.62, stdev=51.44 00:20:11.874 lat (usec): min=532, max=2314, avg=743.50, stdev=52.03 00:20:11.874 clat percentiles (usec): 00:20:11.874 | 1.00th=[ 635], 5.00th=[ 660], 10.00th=[ 676], 20.00th=[ 693], 00:20:11.874 | 30.00th=[ 701], 40.00th=[ 717], 50.00th=[ 725], 60.00th=[ 742], 00:20:11.874 | 70.00th=[ 750], 80.00th=[ 775], 90.00th=[ 799], 95.00th=[ 816], 00:20:11.874 | 99.00th=[ 857], 99.50th=[ 889], 99.90th=[ 963], 99.95th=[ 1012], 00:20:11.874 | 99.99th=[ 1221] 00:20:11.874 bw ( KiB/s): min=20480, max=21696, per=50.08%, avg=20916.21, stdev=371.99, samples=19 00:20:11.874 iops : min= 5120, max= 5424, avg=5229.05, stdev=93.00, samples=19 00:20:11.874 lat (usec) : 500=0.01%, 750=68.43%, 1000=31.51% 00:20:11.874 lat (msec) : 2=0.05%, 4=0.01% 00:20:11.874 cpu : usr=88.98%, sys=9.06%, ctx=54, majf=0, minf=0 00:20:11.874 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:11.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.874 issued rwts: total=52204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.874 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:11.874 00:20:11.874 Run status group 0 (all jobs): 00:20:11.874 READ: bw=40.8MiB/s (42.8MB/s), 20.4MiB/s-20.4MiB/s (21.4MB/s-21.4MB/s), io=408MiB (428MB), run=10001-10001msec 00:20:11.874 10:25:24 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:11.874 10:25:24 -- target/dif.sh@43 -- # local sub 00:20:11.874 10:25:24 -- target/dif.sh@45 -- # for sub in "$@" 00:20:11.874 10:25:24 -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:20:11.874 10:25:24 -- target/dif.sh@36 -- # local sub_id=0 00:20:11.874 10:25:24 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:11.874 10:25:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.874 10:25:24 -- common/autotest_common.sh@10 -- # set +x 00:20:11.874 10:25:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.874 10:25:24 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:11.874 10:25:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.874 10:25:24 -- common/autotest_common.sh@10 -- # set +x 00:20:11.874 10:25:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.874 10:25:24 -- target/dif.sh@45 -- # for sub in "$@" 00:20:11.874 10:25:24 -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:11.874 10:25:24 -- target/dif.sh@36 -- # local sub_id=1 00:20:11.874 10:25:24 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:11.875 10:25:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.875 10:25:24 -- common/autotest_common.sh@10 -- # set +x 00:20:11.875 10:25:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.875 10:25:24 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:11.875 10:25:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.875 10:25:24 -- common/autotest_common.sh@10 -- # set +x 00:20:11.875 ************************************ 00:20:11.875 END TEST fio_dif_1_multi_subsystems 00:20:11.875 ************************************ 00:20:11.875 10:25:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.875 00:20:11.875 real 0m11.079s 00:20:11.875 user 0m18.536s 00:20:11.875 sys 0m2.083s 00:20:11.875 10:25:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:11.875 10:25:24 -- common/autotest_common.sh@10 -- # set +x 00:20:11.875 10:25:24 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:11.875 10:25:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:11.875 10:25:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:11.875 10:25:24 -- common/autotest_common.sh@10 -- # set +x 00:20:11.875 ************************************ 00:20:11.875 START TEST fio_dif_rand_params 00:20:11.875 ************************************ 00:20:11.875 10:25:24 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:20:11.875 10:25:24 -- target/dif.sh@100 -- # local NULL_DIF 00:20:11.875 10:25:24 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:11.875 10:25:24 -- target/dif.sh@103 -- # NULL_DIF=3 00:20:11.875 10:25:24 -- target/dif.sh@103 -- # bs=128k 00:20:11.875 10:25:24 -- target/dif.sh@103 -- # numjobs=3 00:20:11.875 10:25:24 -- target/dif.sh@103 -- # iodepth=3 00:20:11.875 10:25:24 -- target/dif.sh@103 -- # runtime=5 00:20:11.875 10:25:24 -- target/dif.sh@105 -- # create_subsystems 0 00:20:11.875 10:25:24 -- target/dif.sh@28 -- # local sub 00:20:11.875 10:25:24 -- target/dif.sh@30 -- # for sub in "$@" 00:20:11.875 10:25:24 -- target/dif.sh@31 -- # create_subsystem 0 00:20:11.875 10:25:24 -- target/dif.sh@18 -- # local sub_id=0 00:20:11.875 10:25:24 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:11.875 10:25:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.875 10:25:24 -- common/autotest_common.sh@10 -- # set +x 00:20:11.875 bdev_null0 00:20:11.875 10:25:24 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:20:11.875 10:25:24 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:11.875 10:25:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.875 10:25:24 -- common/autotest_common.sh@10 -- # set +x 00:20:11.875 10:25:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.875 10:25:24 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:11.875 10:25:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.875 10:25:24 -- common/autotest_common.sh@10 -- # set +x 00:20:11.875 10:25:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.875 10:25:24 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:11.875 10:25:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.875 10:25:24 -- common/autotest_common.sh@10 -- # set +x 00:20:11.875 [2024-07-26 10:25:24.568265] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.875 10:25:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.875 10:25:24 -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:11.875 10:25:24 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:11.875 10:25:24 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:11.875 10:25:24 -- nvmf/common.sh@520 -- # config=() 00:20:11.875 10:25:24 -- nvmf/common.sh@520 -- # local subsystem config 00:20:11.875 10:25:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:11.875 10:25:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:11.875 { 00:20:11.875 "params": { 00:20:11.875 "name": "Nvme$subsystem", 00:20:11.875 "trtype": "$TEST_TRANSPORT", 00:20:11.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.875 "adrfam": "ipv4", 00:20:11.875 "trsvcid": "$NVMF_PORT", 00:20:11.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.875 "hdgst": ${hdgst:-false}, 00:20:11.875 "ddgst": ${ddgst:-false} 00:20:11.875 }, 00:20:11.875 "method": "bdev_nvme_attach_controller" 00:20:11.875 } 00:20:11.875 EOF 00:20:11.875 )") 00:20:11.875 10:25:24 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:11.875 10:25:24 -- target/dif.sh@82 -- # gen_fio_conf 00:20:11.875 10:25:24 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:11.875 10:25:24 -- target/dif.sh@54 -- # local file 00:20:11.875 10:25:24 -- target/dif.sh@56 -- # cat 00:20:11.875 10:25:24 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:11.875 10:25:24 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:11.875 10:25:24 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:11.875 10:25:24 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:11.875 10:25:24 -- common/autotest_common.sh@1320 -- # shift 00:20:11.875 10:25:24 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:11.875 10:25:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:11.875 10:25:24 -- nvmf/common.sh@542 -- # cat 00:20:11.875 10:25:24 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:11.875 10:25:24 -- target/dif.sh@72 -- # (( file <= files )) 00:20:11.875 10:25:24 -- 
common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:11.875 10:25:24 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:11.875 10:25:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:11.875 10:25:24 -- nvmf/common.sh@544 -- # jq . 00:20:11.875 10:25:24 -- nvmf/common.sh@545 -- # IFS=, 00:20:11.875 10:25:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:11.875 "params": { 00:20:11.875 "name": "Nvme0", 00:20:11.875 "trtype": "tcp", 00:20:11.875 "traddr": "10.0.0.2", 00:20:11.875 "adrfam": "ipv4", 00:20:11.875 "trsvcid": "4420", 00:20:11.875 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:11.875 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:11.875 "hdgst": false, 00:20:11.875 "ddgst": false 00:20:11.875 }, 00:20:11.875 "method": "bdev_nvme_attach_controller" 00:20:11.875 }' 00:20:11.875 10:25:24 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:11.875 10:25:24 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:11.875 10:25:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:11.875 10:25:24 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:11.875 10:25:24 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:11.875 10:25:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:11.875 10:25:24 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:11.875 10:25:24 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:11.875 10:25:24 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:11.875 10:25:24 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:11.875 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:11.875 ... 00:20:11.875 fio-3.35 00:20:11.875 Starting 3 threads 00:20:11.875 [2024-07-26 10:25:25.162012] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
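The knob exercised across all of these dif.sh passes is protection-information handling: the backing null bdev is created with 16 bytes of metadata per 512-byte block and a DIF type (type 3 in this pass), while the TCP transport was created earlier (target/dif.sh@50 above) with DIF insert/strip enabled, so the target inserts and strips the protection information on the wire path and the host-side fio jobs issue plain reads with no DIF awareness. Pulled out of the trace for reference, with rpc.py shown in place of the rpc_cmd wrapper but the arguments otherwise verbatim:

# Target-side TCP transport with DIF insert/strip offload enabled
scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
# 64 MB null bdev: 512-byte data blocks + 16-byte metadata, DIF type 3
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3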
00:20:11.875 [2024-07-26 10:25:25.162114] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:17.154 00:20:17.154 filename0: (groupid=0, jobs=1): err= 0: pid=86431: Fri Jul 26 10:25:30 2024 00:20:17.154 read: IOPS=273, BW=34.2MiB/s (35.9MB/s)(171MiB/5010msec) 00:20:17.154 slat (nsec): min=5225, max=64086, avg=15001.51, stdev=6123.90 00:20:17.154 clat (usec): min=9830, max=14919, avg=10926.31, stdev=363.39 00:20:17.154 lat (usec): min=9837, max=14958, avg=10941.31, stdev=364.26 00:20:17.154 clat percentiles (usec): 00:20:17.154 | 1.00th=[10159], 5.00th=[10421], 10.00th=[10421], 20.00th=[10683], 00:20:17.154 | 30.00th=[10814], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:20:17.154 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11338], 95.00th=[11338], 00:20:17.154 | 99.00th=[11600], 99.50th=[11600], 99.90th=[14877], 99.95th=[14877], 00:20:17.154 | 99.99th=[14877] 00:20:17.154 bw ( KiB/s): min=33724, max=36096, per=33.31%, avg=35006.70, stdev=743.31, samples=10 00:20:17.154 iops : min= 263, max= 282, avg=273.40, stdev= 5.83, samples=10 00:20:17.154 lat (msec) : 10=0.58%, 20=99.42% 00:20:17.154 cpu : usr=91.36%, sys=7.83%, ctx=12, majf=0, minf=9 00:20:17.154 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:17.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.154 issued rwts: total=1371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.154 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:17.154 filename0: (groupid=0, jobs=1): err= 0: pid=86432: Fri Jul 26 10:25:30 2024 00:20:17.154 read: IOPS=273, BW=34.2MiB/s (35.9MB/s)(171MiB/5005msec) 00:20:17.154 slat (nsec): min=6858, max=65346, avg=15632.52, stdev=5396.65 00:20:17.154 clat (usec): min=7919, max=13507, avg=10916.00, stdev=360.74 00:20:17.154 lat (usec): min=7929, max=13532, avg=10931.63, stdev=361.42 00:20:17.154 clat percentiles (usec): 00:20:17.154 | 1.00th=[10028], 5.00th=[10290], 10.00th=[10421], 20.00th=[10683], 00:20:17.154 | 30.00th=[10814], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:20:17.154 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11338], 95.00th=[11338], 00:20:17.154 | 99.00th=[11469], 99.50th=[11600], 99.90th=[13435], 99.95th=[13566], 00:20:17.154 | 99.99th=[13566] 00:20:17.154 bw ( KiB/s): min=34560, max=36096, per=33.37%, avg=35071.56, stdev=645.07, samples=9 00:20:17.154 iops : min= 270, max= 282, avg=273.89, stdev= 5.01, samples=9 00:20:17.154 lat (msec) : 10=0.66%, 20=99.34% 00:20:17.154 cpu : usr=90.75%, sys=8.45%, ctx=11, majf=0, minf=9 00:20:17.154 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:17.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.154 issued rwts: total=1371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.154 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:17.154 filename0: (groupid=0, jobs=1): err= 0: pid=86433: Fri Jul 26 10:25:30 2024 00:20:17.154 read: IOPS=273, BW=34.2MiB/s (35.9MB/s)(171MiB/5007msec) 00:20:17.154 slat (nsec): min=6924, max=58115, avg=15452.04, stdev=5381.17 00:20:17.154 clat (usec): min=9851, max=12755, avg=10921.29, stdev=322.26 00:20:17.154 lat (usec): min=9863, max=12776, avg=10936.74, stdev=322.92 00:20:17.154 clat percentiles (usec): 00:20:17.154 | 1.00th=[10028], 5.00th=[10421], 
10.00th=[10421], 20.00th=[10683], 00:20:17.154 | 30.00th=[10814], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:20:17.154 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11338], 95.00th=[11338], 00:20:17.154 | 99.00th=[11469], 99.50th=[11600], 99.90th=[12780], 99.95th=[12780], 00:20:17.154 | 99.99th=[12780] 00:20:17.154 bw ( KiB/s): min=34491, max=36096, per=33.36%, avg=35056.22, stdev=658.40, samples=9 00:20:17.154 iops : min= 269, max= 282, avg=273.78, stdev= 5.12, samples=9 00:20:17.154 lat (msec) : 10=0.44%, 20=99.56% 00:20:17.154 cpu : usr=92.09%, sys=7.13%, ctx=4, majf=0, minf=9 00:20:17.154 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:17.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.155 issued rwts: total=1371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.155 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:17.155 00:20:17.155 Run status group 0 (all jobs): 00:20:17.155 READ: bw=103MiB/s (108MB/s), 34.2MiB/s-34.2MiB/s (35.9MB/s-35.9MB/s), io=514MiB (539MB), run=5005-5010msec 00:20:17.155 10:25:30 -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:17.155 10:25:30 -- target/dif.sh@43 -- # local sub 00:20:17.155 10:25:30 -- target/dif.sh@45 -- # for sub in "$@" 00:20:17.155 10:25:30 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:17.155 10:25:30 -- target/dif.sh@36 -- # local sub_id=0 00:20:17.155 10:25:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:17.155 10:25:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:17.155 10:25:30 -- common/autotest_common.sh@10 -- # set +x 00:20:17.155 10:25:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:17.155 10:25:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:17.155 10:25:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:17.155 10:25:30 -- common/autotest_common.sh@10 -- # set +x 00:20:17.155 10:25:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:17.155 10:25:30 -- target/dif.sh@109 -- # NULL_DIF=2 00:20:17.155 10:25:30 -- target/dif.sh@109 -- # bs=4k 00:20:17.155 10:25:30 -- target/dif.sh@109 -- # numjobs=8 00:20:17.155 10:25:30 -- target/dif.sh@109 -- # iodepth=16 00:20:17.155 10:25:30 -- target/dif.sh@109 -- # runtime= 00:20:17.155 10:25:30 -- target/dif.sh@109 -- # files=2 00:20:17.155 10:25:30 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:17.155 10:25:30 -- target/dif.sh@28 -- # local sub 00:20:17.155 10:25:30 -- target/dif.sh@30 -- # for sub in "$@" 00:20:17.155 10:25:30 -- target/dif.sh@31 -- # create_subsystem 0 00:20:17.155 10:25:30 -- target/dif.sh@18 -- # local sub_id=0 00:20:17.155 10:25:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:17.155 10:25:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:17.155 10:25:30 -- common/autotest_common.sh@10 -- # set +x 00:20:17.155 bdev_null0 00:20:17.155 10:25:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:17.155 10:25:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:17.155 10:25:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:17.155 10:25:30 -- common/autotest_common.sh@10 -- # set +x 00:20:17.155 10:25:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:17.155 10:25:30 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:17.155 10:25:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:17.155 10:25:30 -- common/autotest_common.sh@10 -- # set +x 00:20:17.155 10:25:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:17.155 10:25:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:17.155 10:25:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:17.155 10:25:30 -- common/autotest_common.sh@10 -- # set +x 00:20:17.155 [2024-07-26 10:25:30.556826] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.155 10:25:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:17.155 10:25:30 -- target/dif.sh@30 -- # for sub in "$@" 00:20:17.155 10:25:30 -- target/dif.sh@31 -- # create_subsystem 1 00:20:17.155 10:25:30 -- target/dif.sh@18 -- # local sub_id=1 00:20:17.155 10:25:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:17.155 10:25:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:17.155 10:25:30 -- common/autotest_common.sh@10 -- # set +x 00:20:17.155 bdev_null1 00:20:17.155 10:25:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:17.155 10:25:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:17.155 10:25:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:17.155 10:25:30 -- common/autotest_common.sh@10 -- # set +x 00:20:17.155 10:25:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:17.155 10:25:30 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:17.155 10:25:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:17.155 10:25:30 -- common/autotest_common.sh@10 -- # set +x 00:20:17.155 10:25:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:17.155 10:25:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:17.155 10:25:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:17.155 10:25:30 -- common/autotest_common.sh@10 -- # set +x 00:20:17.155 10:25:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:17.155 10:25:30 -- target/dif.sh@30 -- # for sub in "$@" 00:20:17.155 10:25:30 -- target/dif.sh@31 -- # create_subsystem 2 00:20:17.155 10:25:30 -- target/dif.sh@18 -- # local sub_id=2 00:20:17.155 10:25:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:17.155 10:25:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:17.155 10:25:30 -- common/autotest_common.sh@10 -- # set +x 00:20:17.421 bdev_null2 00:20:17.421 10:25:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:17.421 10:25:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:17.421 10:25:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:17.421 10:25:30 -- common/autotest_common.sh@10 -- # set +x 00:20:17.421 10:25:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:17.421 10:25:30 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:17.421 10:25:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:17.421 10:25:30 -- common/autotest_common.sh@10 -- # set +x 00:20:17.421 10:25:30 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:17.421 10:25:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:17.421 10:25:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:17.421 10:25:30 -- common/autotest_common.sh@10 -- # set +x 00:20:17.421 10:25:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:17.421 10:25:30 -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:17.421 10:25:30 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:17.421 10:25:30 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:17.421 10:25:30 -- nvmf/common.sh@520 -- # config=() 00:20:17.421 10:25:30 -- nvmf/common.sh@520 -- # local subsystem config 00:20:17.421 10:25:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:17.421 10:25:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:17.421 { 00:20:17.421 "params": { 00:20:17.421 "name": "Nvme$subsystem", 00:20:17.421 "trtype": "$TEST_TRANSPORT", 00:20:17.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.421 "adrfam": "ipv4", 00:20:17.421 "trsvcid": "$NVMF_PORT", 00:20:17.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.421 "hdgst": ${hdgst:-false}, 00:20:17.421 "ddgst": ${ddgst:-false} 00:20:17.421 }, 00:20:17.421 "method": "bdev_nvme_attach_controller" 00:20:17.421 } 00:20:17.421 EOF 00:20:17.421 )") 00:20:17.421 10:25:30 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:17.421 10:25:30 -- target/dif.sh@82 -- # gen_fio_conf 00:20:17.421 10:25:30 -- target/dif.sh@54 -- # local file 00:20:17.421 10:25:30 -- target/dif.sh@56 -- # cat 00:20:17.421 10:25:30 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:17.421 10:25:30 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:17.421 10:25:30 -- nvmf/common.sh@542 -- # cat 00:20:17.421 10:25:30 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:17.421 10:25:30 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:17.421 10:25:30 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:17.421 10:25:30 -- common/autotest_common.sh@1320 -- # shift 00:20:17.421 10:25:30 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:17.421 10:25:30 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:17.421 10:25:30 -- target/dif.sh@72 -- # (( file <= files )) 00:20:17.421 10:25:30 -- target/dif.sh@73 -- # cat 00:20:17.421 10:25:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:17.421 10:25:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:17.421 10:25:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:17.421 { 00:20:17.421 "params": { 00:20:17.421 "name": "Nvme$subsystem", 00:20:17.421 "trtype": "$TEST_TRANSPORT", 00:20:17.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.421 "adrfam": "ipv4", 00:20:17.421 "trsvcid": "$NVMF_PORT", 00:20:17.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.421 "hdgst": ${hdgst:-false}, 00:20:17.421 "ddgst": ${ddgst:-false} 00:20:17.421 }, 00:20:17.421 "method": "bdev_nvme_attach_controller" 00:20:17.421 } 00:20:17.421 EOF 00:20:17.421 )") 00:20:17.421 10:25:30 -- common/autotest_common.sh@1324 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:17.421 10:25:30 -- target/dif.sh@72 -- # (( file++ )) 00:20:17.421 10:25:30 -- target/dif.sh@72 -- # (( file <= files )) 00:20:17.421 10:25:30 -- target/dif.sh@73 -- # cat 00:20:17.421 10:25:30 -- nvmf/common.sh@542 -- # cat 00:20:17.421 10:25:30 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:17.421 10:25:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:17.421 10:25:30 -- target/dif.sh@72 -- # (( file++ )) 00:20:17.421 10:25:30 -- target/dif.sh@72 -- # (( file <= files )) 00:20:17.421 10:25:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:17.421 10:25:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:17.421 { 00:20:17.421 "params": { 00:20:17.421 "name": "Nvme$subsystem", 00:20:17.421 "trtype": "$TEST_TRANSPORT", 00:20:17.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.421 "adrfam": "ipv4", 00:20:17.421 "trsvcid": "$NVMF_PORT", 00:20:17.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.421 "hdgst": ${hdgst:-false}, 00:20:17.421 "ddgst": ${ddgst:-false} 00:20:17.421 }, 00:20:17.421 "method": "bdev_nvme_attach_controller" 00:20:17.421 } 00:20:17.421 EOF 00:20:17.421 )") 00:20:17.421 10:25:30 -- nvmf/common.sh@542 -- # cat 00:20:17.421 10:25:30 -- nvmf/common.sh@544 -- # jq . 00:20:17.421 10:25:30 -- nvmf/common.sh@545 -- # IFS=, 00:20:17.421 10:25:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:17.421 "params": { 00:20:17.421 "name": "Nvme0", 00:20:17.421 "trtype": "tcp", 00:20:17.421 "traddr": "10.0.0.2", 00:20:17.421 "adrfam": "ipv4", 00:20:17.421 "trsvcid": "4420", 00:20:17.421 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:17.421 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:17.422 "hdgst": false, 00:20:17.422 "ddgst": false 00:20:17.422 }, 00:20:17.422 "method": "bdev_nvme_attach_controller" 00:20:17.422 },{ 00:20:17.422 "params": { 00:20:17.422 "name": "Nvme1", 00:20:17.422 "trtype": "tcp", 00:20:17.422 "traddr": "10.0.0.2", 00:20:17.422 "adrfam": "ipv4", 00:20:17.422 "trsvcid": "4420", 00:20:17.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.422 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:17.422 "hdgst": false, 00:20:17.422 "ddgst": false 00:20:17.422 }, 00:20:17.422 "method": "bdev_nvme_attach_controller" 00:20:17.422 },{ 00:20:17.422 "params": { 00:20:17.422 "name": "Nvme2", 00:20:17.422 "trtype": "tcp", 00:20:17.422 "traddr": "10.0.0.2", 00:20:17.422 "adrfam": "ipv4", 00:20:17.422 "trsvcid": "4420", 00:20:17.422 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:17.422 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:17.422 "hdgst": false, 00:20:17.422 "ddgst": false 00:20:17.422 }, 00:20:17.422 "method": "bdev_nvme_attach_controller" 00:20:17.422 }' 00:20:17.422 10:25:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:17.422 10:25:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:17.422 10:25:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:17.422 10:25:30 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:17.422 10:25:30 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:17.422 10:25:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:17.422 10:25:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:17.422 10:25:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:17.422 10:25:30 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:17.422 10:25:30 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:17.422 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:17.422 ... 00:20:17.422 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:17.422 ... 00:20:17.422 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:17.422 ... 00:20:17.422 fio-3.35 00:20:17.422 Starting 24 threads 00:20:17.989 [2024-07-26 10:25:31.363906] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:20:17.989 [2024-07-26 10:25:31.364025] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:30.194 00:20:30.194 filename0: (groupid=0, jobs=1): err= 0: pid=86529: Fri Jul 26 10:25:41 2024 00:20:30.194 read: IOPS=225, BW=901KiB/s (922kB/s)(9056KiB/10054msec) 00:20:30.194 slat (usec): min=4, max=8023, avg=24.97, stdev=260.49 00:20:30.194 clat (msec): min=3, max=134, avg=70.79, stdev=22.60 00:20:30.194 lat (msec): min=3, max=134, avg=70.81, stdev=22.60 00:20:30.194 clat percentiles (msec): 00:20:30.194 | 1.00th=[ 5], 5.00th=[ 38], 10.00th=[ 44], 20.00th=[ 52], 00:20:30.194 | 30.00th=[ 62], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 73], 00:20:30.194 | 70.00th=[ 82], 80.00th=[ 95], 90.00th=[ 103], 95.00th=[ 107], 00:20:30.194 | 99.00th=[ 113], 99.50th=[ 120], 99.90th=[ 130], 99.95th=[ 133], 00:20:30.194 | 99.99th=[ 136] 00:20:30.194 bw ( KiB/s): min= 688, max= 1624, per=4.34%, avg=901.20, stdev=204.15, samples=20 00:20:30.194 iops : min= 172, max= 406, avg=225.25, stdev=51.05, samples=20 00:20:30.194 lat (msec) : 4=0.09%, 10=1.33%, 20=1.41%, 50=15.37%, 100=69.26% 00:20:30.194 lat (msec) : 250=12.54% 00:20:30.194 cpu : usr=42.65%, sys=2.20%, ctx=1447, majf=0, minf=0 00:20:30.194 IO depths : 1=0.1%, 2=0.4%, 4=1.2%, 8=81.9%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:30.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.194 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.194 issued rwts: total=2264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.194 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:30.194 filename0: (groupid=0, jobs=1): err= 0: pid=86530: Fri Jul 26 10:25:41 2024 00:20:30.194 read: IOPS=217, BW=868KiB/s (889kB/s)(8696KiB/10015msec) 00:20:30.194 slat (usec): min=4, max=12036, avg=27.13, stdev=310.09 00:20:30.194 clat (msec): min=31, max=143, avg=73.56, stdev=21.60 00:20:30.194 lat (msec): min=31, max=143, avg=73.58, stdev=21.60 00:20:30.194 clat percentiles (msec): 00:20:30.194 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 54], 00:20:30.194 | 30.00th=[ 62], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 75], 00:20:30.194 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 110], 00:20:30.194 | 99.00th=[ 129], 99.50th=[ 131], 99.90th=[ 136], 99.95th=[ 144], 00:20:30.194 | 99.99th=[ 144] 00:20:30.194 bw ( KiB/s): min= 544, max= 1080, per=4.17%, avg=865.60, stdev=162.50, samples=20 00:20:30.194 iops : min= 136, max= 270, avg=216.40, stdev=40.62, samples=20 00:20:30.194 lat (msec) : 50=17.25%, 100=68.12%, 250=14.63% 00:20:30.194 cpu : usr=39.18%, sys=2.29%, ctx=1423, majf=0, minf=9 00:20:30.194 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=79.7%, 16=15.7%, 
32=0.0%, >=64=0.0% 00:20:30.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.194 complete : 0=0.0%, 4=88.1%, 8=11.1%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.194 issued rwts: total=2174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.194 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:30.194 filename0: (groupid=0, jobs=1): err= 0: pid=86531: Fri Jul 26 10:25:41 2024 00:20:30.194 read: IOPS=215, BW=860KiB/s (881kB/s)(8628KiB/10030msec) 00:20:30.194 slat (usec): min=4, max=8027, avg=21.59, stdev=184.76 00:20:30.194 clat (msec): min=33, max=144, avg=74.28, stdev=20.67 00:20:30.194 lat (msec): min=33, max=144, avg=74.30, stdev=20.67 00:20:30.194 clat percentiles (msec): 00:20:30.194 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 59], 00:20:30.194 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 74], 00:20:30.194 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 108], 00:20:30.194 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 144], 00:20:30.194 | 99.99th=[ 144] 00:20:30.194 bw ( KiB/s): min= 512, max= 1048, per=4.12%, avg=856.05, stdev=138.15, samples=20 00:20:30.194 iops : min= 128, max= 262, avg=213.95, stdev=34.56, samples=20 00:20:30.194 lat (msec) : 50=15.86%, 100=73.02%, 250=11.13% 00:20:30.194 cpu : usr=33.35%, sys=1.52%, ctx=1092, majf=0, minf=0 00:20:30.194 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=80.8%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:30.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.194 complete : 0=0.0%, 4=88.1%, 8=11.4%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.194 issued rwts: total=2157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.194 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:30.194 filename0: (groupid=0, jobs=1): err= 0: pid=86532: Fri Jul 26 10:25:41 2024 00:20:30.194 read: IOPS=230, BW=922KiB/s (944kB/s)(9224KiB/10002msec) 00:20:30.194 slat (usec): min=4, max=8049, avg=27.94, stdev=289.18 00:20:30.194 clat (usec): min=1267, max=147477, avg=69289.44, stdev=21750.20 00:20:30.194 lat (usec): min=1274, max=147490, avg=69317.38, stdev=21751.13 00:20:30.194 clat percentiles (msec): 00:20:30.194 | 1.00th=[ 4], 5.00th=[ 38], 10.00th=[ 47], 20.00th=[ 48], 00:20:30.194 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 70], 60.00th=[ 72], 00:20:30.194 | 70.00th=[ 75], 80.00th=[ 88], 90.00th=[ 100], 95.00th=[ 108], 00:20:30.194 | 99.00th=[ 113], 99.50th=[ 129], 99.90th=[ 129], 99.95th=[ 148], 00:20:30.194 | 99.99th=[ 148] 00:20:30.194 bw ( KiB/s): min= 720, max= 1072, per=4.33%, avg=898.32, stdev=116.95, samples=19 00:20:30.194 iops : min= 180, max= 268, avg=224.58, stdev=29.24, samples=19 00:20:30.194 lat (msec) : 2=0.56%, 4=0.65%, 10=0.30%, 20=0.26%, 50=21.03% 00:20:30.194 lat (msec) : 100=67.56%, 250=9.63% 00:20:30.194 cpu : usr=33.28%, sys=1.63%, ctx=963, majf=0, minf=9 00:20:30.194 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.1%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:30.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.194 complete : 0=0.0%, 4=87.1%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.194 issued rwts: total=2306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.194 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:30.194 filename0: (groupid=0, jobs=1): err= 0: pid=86533: Fri Jul 26 10:25:41 2024 00:20:30.194 read: IOPS=215, BW=862KiB/s (882kB/s)(8656KiB/10044msec) 00:20:30.194 slat (usec): min=3, max=8024, avg=22.65, stdev=210.95 00:20:30.194 clat (msec): min=19, max=143, 
avg=74.13, stdev=21.62 00:20:30.194 lat (msec): min=19, max=144, avg=74.15, stdev=21.61 00:20:30.194 clat percentiles (msec): 00:20:30.194 | 1.00th=[ 33], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 56], 00:20:30.194 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 74], 00:20:30.194 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 108], 00:20:30.194 | 99.00th=[ 120], 99.50th=[ 125], 99.90th=[ 132], 99.95th=[ 144], 00:20:30.194 | 99.99th=[ 144] 00:20:30.194 bw ( KiB/s): min= 640, max= 1274, per=4.14%, avg=859.20, stdev=169.42, samples=20 00:20:30.194 iops : min= 160, max= 318, avg=214.75, stdev=42.32, samples=20 00:20:30.194 lat (msec) : 20=0.74%, 50=17.01%, 100=68.99%, 250=13.26% 00:20:30.194 cpu : usr=34.86%, sys=1.76%, ctx=985, majf=0, minf=9 00:20:30.194 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.0%, 16=17.0%, 32=0.0%, >=64=0.0% 00:20:30.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.195 complete : 0=0.0%, 4=88.0%, 8=11.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.195 issued rwts: total=2164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.195 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:30.195 filename0: (groupid=0, jobs=1): err= 0: pid=86534: Fri Jul 26 10:25:41 2024 00:20:30.195 read: IOPS=224, BW=897KiB/s (918kB/s)(8972KiB/10007msec) 00:20:30.195 slat (usec): min=4, max=8048, avg=35.43, stdev=378.18 00:20:30.195 clat (msec): min=16, max=136, avg=71.25, stdev=19.83 00:20:30.195 lat (msec): min=16, max=136, avg=71.29, stdev=19.83 00:20:30.195 clat percentiles (msec): 00:20:30.195 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 52], 00:20:30.195 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:20:30.195 | 70.00th=[ 81], 80.00th=[ 93], 90.00th=[ 102], 95.00th=[ 108], 00:20:30.195 | 99.00th=[ 112], 99.50th=[ 125], 99.90th=[ 125], 99.95th=[ 136], 00:20:30.195 | 99.99th=[ 138] 00:20:30.195 bw ( KiB/s): min= 712, max= 1096, per=4.27%, avg=888.00, stdev=121.65, samples=19 00:20:30.195 iops : min= 178, max= 274, avg=222.00, stdev=30.41, samples=19 00:20:30.195 lat (msec) : 20=0.13%, 50=19.26%, 100=70.40%, 250=10.21% 00:20:30.195 cpu : usr=31.68%, sys=1.48%, ctx=939, majf=0, minf=9 00:20:30.195 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.7%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:30.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.195 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.195 issued rwts: total=2243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.195 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:30.195 filename0: (groupid=0, jobs=1): err= 0: pid=86535: Fri Jul 26 10:25:41 2024 00:20:30.195 read: IOPS=230, BW=923KiB/s (945kB/s)(9232KiB/10004msec) 00:20:30.195 slat (usec): min=4, max=10050, avg=21.52, stdev=209.02 00:20:30.195 clat (msec): min=3, max=130, avg=69.25, stdev=20.91 00:20:30.195 lat (msec): min=3, max=130, avg=69.27, stdev=20.90 00:20:30.195 clat percentiles (msec): 00:20:30.195 | 1.00th=[ 31], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 48], 00:20:30.195 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 72], 00:20:30.195 | 70.00th=[ 78], 80.00th=[ 91], 90.00th=[ 100], 95.00th=[ 108], 00:20:30.195 | 99.00th=[ 115], 99.50th=[ 116], 99.90th=[ 125], 99.95th=[ 125], 00:20:30.195 | 99.99th=[ 131] 00:20:30.195 bw ( KiB/s): min= 688, max= 1080, per=4.37%, avg=907.00, stdev=129.08, samples=19 00:20:30.195 iops : min= 172, max= 270, avg=226.74, stdev=32.28, samples=19 00:20:30.195 lat (msec) : 4=0.30%, 10=0.13%, 
20=0.39%, 50=21.53%, 100=68.37% 00:20:30.195 lat (msec) : 250=9.27% 00:20:30.195 cpu : usr=38.75%, sys=1.76%, ctx=1231, majf=0, minf=9 00:20:30.195 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.2%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:30.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.195 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.195 issued rwts: total=2308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.195 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:30.195 filename0: (groupid=0, jobs=1): err= 0: pid=86536: Fri Jul 26 10:25:41 2024 00:20:30.195 read: IOPS=227, BW=910KiB/s (932kB/s)(9112KiB/10011msec) 00:20:30.195 slat (usec): min=5, max=8023, avg=21.72, stdev=167.99 00:20:30.195 clat (msec): min=16, max=122, avg=70.20, stdev=19.84 00:20:30.195 lat (msec): min=16, max=122, avg=70.23, stdev=19.84 00:20:30.195 clat percentiles (msec): 00:20:30.195 | 1.00th=[ 37], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 51], 00:20:30.195 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 72], 00:20:30.195 | 70.00th=[ 78], 80.00th=[ 91], 90.00th=[ 101], 95.00th=[ 107], 00:20:30.195 | 99.00th=[ 112], 99.50th=[ 115], 99.90th=[ 121], 99.95th=[ 124], 00:20:30.195 | 99.99th=[ 124] 00:20:30.195 bw ( KiB/s): min= 712, max= 1096, per=4.37%, avg=907.60, stdev=128.43, samples=20 00:20:30.195 iops : min= 178, max= 274, avg=226.90, stdev=32.11, samples=20 00:20:30.195 lat (msec) : 20=0.26%, 50=18.74%, 100=70.94%, 250=10.05% 00:20:30.195 cpu : usr=45.50%, sys=2.09%, ctx=1338, majf=0, minf=9 00:20:30.195 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=82.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:30.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.195 complete : 0=0.0%, 4=87.1%, 8=12.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.195 issued rwts: total=2278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.195 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:30.195 filename1: (groupid=0, jobs=1): err= 0: pid=86537: Fri Jul 26 10:25:41 2024 00:20:30.195 read: IOPS=209, BW=840KiB/s (860kB/s)(8412KiB/10015msec) 00:20:30.195 slat (usec): min=3, max=3689, avg=19.15, stdev=80.55 00:20:30.195 clat (msec): min=32, max=144, avg=76.07, stdev=24.57 00:20:30.195 lat (msec): min=32, max=144, avg=76.09, stdev=24.57 00:20:30.195 clat percentiles (msec): 00:20:30.195 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:20:30.195 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 73], 00:20:30.195 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 130], 00:20:30.195 | 99.00th=[ 140], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 144], 00:20:30.195 | 99.99th=[ 144] 00:20:30.195 bw ( KiB/s): min= 512, max= 1096, per=4.03%, avg=837.20, stdev=188.27, samples=20 00:20:30.195 iops : min= 128, max= 274, avg=209.30, stdev=47.07, samples=20 00:20:30.195 lat (msec) : 50=17.50%, 100=65.43%, 250=17.07% 00:20:30.195 cpu : usr=37.53%, sys=1.85%, ctx=1124, majf=0, minf=9 00:20:30.195 IO depths : 1=0.1%, 2=1.7%, 4=6.6%, 8=76.4%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:30.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.195 complete : 0=0.0%, 4=89.0%, 8=9.6%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.195 issued rwts: total=2103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.195 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:30.195 filename1: (groupid=0, jobs=1): err= 0: pid=86538: Fri Jul 26 10:25:41 2024 00:20:30.195 read: IOPS=220, BW=883KiB/s 
(904kB/s)(8852KiB/10027msec) 00:20:30.195 slat (usec): min=5, max=8025, avg=24.88, stdev=240.80 00:20:30.195 clat (msec): min=32, max=132, avg=72.36, stdev=19.89 00:20:30.195 lat (msec): min=32, max=132, avg=72.39, stdev=19.90 00:20:30.195 clat percentiles (msec): 00:20:30.195 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 57], 00:20:30.195 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 72], 00:20:30.195 | 70.00th=[ 83], 80.00th=[ 96], 90.00th=[ 102], 95.00th=[ 108], 00:20:30.195 | 99.00th=[ 109], 99.50th=[ 116], 99.90th=[ 124], 99.95th=[ 132], 00:20:30.195 | 99.99th=[ 132] 00:20:30.195 bw ( KiB/s): min= 688, max= 1056, per=4.23%, avg=878.40, stdev=117.99, samples=20 00:20:30.195 iops : min= 172, max= 264, avg=219.55, stdev=29.53, samples=20 00:20:30.195 lat (msec) : 50=18.48%, 100=70.94%, 250=10.57% 00:20:30.195 cpu : usr=31.50%, sys=1.68%, ctx=944, majf=0, minf=9 00:20:30.195 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.8%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:30.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.195 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.195 issued rwts: total=2213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.195 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:30.195 filename1: (groupid=0, jobs=1): err= 0: pid=86539: Fri Jul 26 10:25:41 2024 00:20:30.195 read: IOPS=217, BW=869KiB/s (889kB/s)(8724KiB/10044msec) 00:20:30.195 slat (usec): min=4, max=8024, avg=24.90, stdev=257.29 00:20:30.195 clat (msec): min=6, max=143, avg=73.56, stdev=20.62 00:20:30.195 lat (msec): min=6, max=143, avg=73.58, stdev=20.62 00:20:30.195 clat percentiles (msec): 00:20:30.195 | 1.00th=[ 12], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 59], 00:20:30.195 | 30.00th=[ 63], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 75], 00:20:30.195 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 102], 95.00th=[ 108], 00:20:30.195 | 99.00th=[ 112], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 136], 00:20:30.195 | 99.99th=[ 144] 00:20:30.195 bw ( KiB/s): min= 616, max= 1320, per=4.17%, avg=865.90, stdev=151.92, samples=20 00:20:30.195 iops : min= 154, max= 330, avg=216.45, stdev=38.00, samples=20 00:20:30.195 lat (msec) : 10=0.73%, 20=0.73%, 50=10.91%, 100=77.30%, 250=10.32% 00:20:30.195 cpu : usr=37.16%, sys=1.79%, ctx=1127, majf=0, minf=9 00:20:30.195 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.1%, 16=16.8%, 32=0.0%, >=64=0.0% 00:20:30.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.195 complete : 0=0.0%, 4=88.2%, 8=11.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.195 issued rwts: total=2181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.195 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:30.195 filename1: (groupid=0, jobs=1): err= 0: pid=86540: Fri Jul 26 10:25:41 2024 00:20:30.195 read: IOPS=206, BW=828KiB/s (847kB/s)(8304KiB/10034msec) 00:20:30.195 slat (usec): min=5, max=8037, avg=20.75, stdev=176.23 00:20:30.195 clat (msec): min=32, max=141, avg=77.19, stdev=23.80 00:20:30.195 lat (msec): min=32, max=141, avg=77.21, stdev=23.80 00:20:30.195 clat percentiles (msec): 00:20:30.195 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:20:30.195 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:20:30.195 | 70.00th=[ 93], 80.00th=[ 102], 90.00th=[ 109], 95.00th=[ 116], 00:20:30.195 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 140], 99.95th=[ 142], 00:20:30.195 | 99.99th=[ 142] 00:20:30.195 bw ( KiB/s): min= 510, max= 1024, per=3.97%, avg=823.60, 
stdev=179.68, samples=20 00:20:30.195 iops : min= 127, max= 256, avg=205.85, stdev=44.96, samples=20 00:20:30.195 lat (msec) : 50=15.03%, 100=64.31%, 250=20.66% 00:20:30.195 cpu : usr=36.61%, sys=1.99%, ctx=1022, majf=0, minf=9 00:20:30.195 IO depths : 1=0.1%, 2=1.8%, 4=7.4%, 8=75.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:30.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.195 complete : 0=0.0%, 4=89.5%, 8=8.9%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.195 issued rwts: total=2076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.195 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:30.195 filename1: (groupid=0, jobs=1): err= 0: pid=86541: Fri Jul 26 10:25:41 2024 00:20:30.195 read: IOPS=209, BW=839KiB/s (859kB/s)(8444KiB/10062msec) 00:20:30.195 slat (usec): min=4, max=2036, avg=15.79, stdev=44.71 00:20:30.195 clat (msec): min=4, max=146, avg=76.14, stdev=26.15 00:20:30.195 lat (msec): min=4, max=146, avg=76.16, stdev=26.15 00:20:30.195 clat percentiles (msec): 00:20:30.195 | 1.00th=[ 5], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 57], 00:20:30.196 | 30.00th=[ 64], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 83], 00:20:30.196 | 70.00th=[ 92], 80.00th=[ 100], 90.00th=[ 107], 95.00th=[ 122], 00:20:30.196 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 146], 99.95th=[ 146], 00:20:30.196 | 99.99th=[ 146] 00:20:30.196 bw ( KiB/s): min= 528, max= 1408, per=4.03%, avg=837.60, stdev=223.07, samples=20 00:20:30.196 iops : min= 132, max= 352, avg=209.35, stdev=55.76, samples=20 00:20:30.196 lat (msec) : 10=1.52%, 20=1.52%, 50=12.41%, 100=65.85%, 250=18.71% 00:20:30.196 cpu : usr=41.89%, sys=2.18%, ctx=1451, majf=0, minf=0 00:20:30.196 IO depths : 1=0.1%, 2=2.4%, 4=9.4%, 8=72.8%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:30.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.196 complete : 0=0.0%, 4=90.1%, 8=7.8%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.196 issued rwts: total=2111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.196 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:30.196 filename1: (groupid=0, jobs=1): err= 0: pid=86542: Fri Jul 26 10:25:41 2024 00:20:30.196 read: IOPS=219, BW=876KiB/s (897kB/s)(8784KiB/10026msec) 00:20:30.196 slat (usec): min=4, max=8028, avg=25.40, stdev=256.37 00:20:30.196 clat (msec): min=31, max=132, avg=72.94, stdev=19.45 00:20:30.196 lat (msec): min=31, max=132, avg=72.97, stdev=19.45 00:20:30.196 clat percentiles (msec): 00:20:30.196 | 1.00th=[ 37], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 56], 00:20:30.196 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 73], 00:20:30.196 | 70.00th=[ 82], 80.00th=[ 95], 90.00th=[ 105], 95.00th=[ 107], 00:20:30.196 | 99.00th=[ 110], 99.50th=[ 113], 99.90th=[ 122], 99.95th=[ 128], 00:20:30.196 | 99.99th=[ 133] 00:20:30.196 bw ( KiB/s): min= 689, max= 1032, per=4.20%, avg=871.75, stdev=106.90, samples=20 00:20:30.196 iops : min= 172, max= 258, avg=217.90, stdev=26.75, samples=20 00:20:30.196 lat (msec) : 50=15.66%, 100=71.22%, 250=13.11% 00:20:30.196 cpu : usr=38.56%, sys=1.67%, ctx=1299, majf=0, minf=9 00:20:30.196 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.8%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:30.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.196 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.196 issued rwts: total=2196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.196 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:30.196 filename1: (groupid=0, jobs=1): 
err= 0: pid=86543: Fri Jul 26 10:25:41 2024 00:20:30.196 read: IOPS=216, BW=867KiB/s (888kB/s)(8692KiB/10023msec) 00:20:30.196 slat (usec): min=3, max=8051, avg=27.08, stdev=254.74 00:20:30.196 clat (msec): min=34, max=143, avg=73.65, stdev=20.06 00:20:30.196 lat (msec): min=34, max=144, avg=73.67, stdev=20.05 00:20:30.196 clat percentiles (msec): 00:20:30.196 | 1.00th=[ 39], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 55], 00:20:30.196 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 77], 00:20:30.196 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 108], 00:20:30.196 | 99.00th=[ 111], 99.50th=[ 112], 99.90th=[ 132], 99.95th=[ 132], 00:20:30.196 | 99.99th=[ 144] 00:20:30.196 bw ( KiB/s): min= 656, max= 1080, per=4.15%, avg=862.60, stdev=138.67, samples=20 00:20:30.196 iops : min= 164, max= 270, avg=215.60, stdev=34.69, samples=20 00:20:30.196 lat (msec) : 50=15.55%, 100=73.49%, 250=10.95% 00:20:30.196 cpu : usr=39.09%, sys=1.99%, ctx=1199, majf=0, minf=9 00:20:30.196 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.0%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:30.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.196 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.196 issued rwts: total=2173,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.196 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:30.196 filename1: (groupid=0, jobs=1): err= 0: pid=86544: Fri Jul 26 10:25:41 2024 00:20:30.196 read: IOPS=218, BW=873KiB/s (894kB/s)(8764KiB/10041msec) 00:20:30.196 slat (usec): min=4, max=8031, avg=23.66, stdev=242.10 00:20:30.196 clat (msec): min=12, max=132, avg=73.16, stdev=19.91 00:20:30.196 lat (msec): min=12, max=132, avg=73.18, stdev=19.91 00:20:30.196 clat percentiles (msec): 00:20:30.196 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 59], 00:20:30.196 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 73], 00:20:30.196 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 103], 95.00th=[ 108], 00:20:30.196 | 99.00th=[ 112], 99.50th=[ 120], 99.90th=[ 129], 99.95th=[ 133], 00:20:30.196 | 99.99th=[ 133] 00:20:30.196 bw ( KiB/s): min= 696, max= 1234, per=4.19%, avg=870.00, stdev=136.07, samples=20 00:20:30.196 iops : min= 174, max= 308, avg=217.45, stdev=33.97, samples=20 00:20:30.196 lat (msec) : 20=0.73%, 50=13.24%, 100=74.03%, 250=12.00% 00:20:30.196 cpu : usr=34.29%, sys=1.73%, ctx=999, majf=0, minf=9 00:20:30.196 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.2%, 16=16.8%, 32=0.0%, >=64=0.0% 00:20:30.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.196 complete : 0=0.0%, 4=87.8%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.196 issued rwts: total=2191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.196 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:30.196 filename2: (groupid=0, jobs=1): err= 0: pid=86545: Fri Jul 26 10:25:41 2024 00:20:30.196 read: IOPS=200, BW=803KiB/s (822kB/s)(8036KiB/10011msec) 00:20:30.196 slat (usec): min=4, max=8025, avg=26.13, stdev=256.63 00:20:30.196 clat (msec): min=17, max=143, avg=79.56, stdev=25.30 00:20:30.196 lat (msec): min=17, max=143, avg=79.58, stdev=25.32 00:20:30.196 clat percentiles (msec): 00:20:30.196 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 59], 00:20:30.196 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 84], 00:20:30.196 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 112], 95.00th=[ 128], 00:20:30.196 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 144], 00:20:30.196 | 99.99th=[ 
144] 00:20:30.196 bw ( KiB/s): min= 512, max= 1064, per=3.85%, avg=799.60, stdev=190.94, samples=20 00:20:30.196 iops : min= 128, max= 266, avg=199.90, stdev=47.73, samples=20 00:20:30.196 lat (msec) : 20=0.35%, 50=13.94%, 100=63.12%, 250=22.60% 00:20:30.196 cpu : usr=33.66%, sys=1.50%, ctx=1080, majf=0, minf=9 00:20:30.196 IO depths : 1=0.1%, 2=2.8%, 4=11.4%, 8=71.2%, 16=14.6%, 32=0.0%, >=64=0.0% 00:20:30.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.196 complete : 0=0.0%, 4=90.4%, 8=7.1%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.196 issued rwts: total=2009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.196 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:30.196 filename2: (groupid=0, jobs=1): err= 0: pid=86546: Fri Jul 26 10:25:41 2024 00:20:30.196 read: IOPS=222, BW=889KiB/s (911kB/s)(8920KiB/10030msec) 00:20:30.196 slat (usec): min=4, max=8033, avg=32.46, stdev=349.86 00:20:30.196 clat (msec): min=27, max=138, avg=71.80, stdev=19.85 00:20:30.196 lat (msec): min=27, max=138, avg=71.84, stdev=19.85 00:20:30.196 clat percentiles (msec): 00:20:30.196 | 1.00th=[ 32], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 55], 00:20:30.196 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 72], 00:20:30.196 | 70.00th=[ 81], 80.00th=[ 94], 90.00th=[ 100], 95.00th=[ 107], 00:20:30.196 | 99.00th=[ 112], 99.50th=[ 115], 99.90th=[ 131], 99.95th=[ 133], 00:20:30.196 | 99.99th=[ 140] 00:20:30.196 bw ( KiB/s): min= 664, max= 1240, per=4.26%, avg=885.45, stdev=141.28, samples=20 00:20:30.196 iops : min= 166, max= 310, avg=221.35, stdev=35.31, samples=20 00:20:30.196 lat (msec) : 50=17.09%, 100=73.54%, 250=9.37% 00:20:30.196 cpu : usr=40.27%, sys=2.00%, ctx=1302, majf=0, minf=9 00:20:30.196 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.8%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:30.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.196 complete : 0=0.0%, 4=87.5%, 8=12.4%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.196 issued rwts: total=2230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.196 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:30.196 filename2: (groupid=0, jobs=1): err= 0: pid=86547: Fri Jul 26 10:25:41 2024 00:20:30.196 read: IOPS=208, BW=833KiB/s (853kB/s)(8360KiB/10034msec) 00:20:30.196 slat (usec): min=4, max=8032, avg=37.95, stdev=393.44 00:20:30.196 clat (msec): min=32, max=143, avg=76.65, stdev=20.55 00:20:30.196 lat (msec): min=32, max=143, avg=76.69, stdev=20.54 00:20:30.196 clat percentiles (msec): 00:20:30.196 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:20:30.196 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 79], 00:20:30.196 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 109], 00:20:30.196 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 133], 99.95th=[ 144], 00:20:30.196 | 99.99th=[ 144] 00:20:30.196 bw ( KiB/s): min= 528, max= 1016, per=3.99%, avg=829.20, stdev=143.62, samples=20 00:20:30.196 iops : min= 132, max= 254, avg=207.25, stdev=35.91, samples=20 00:20:30.196 lat (msec) : 50=11.15%, 100=73.73%, 250=15.12% 00:20:30.196 cpu : usr=31.62%, sys=1.55%, ctx=933, majf=0, minf=9 00:20:30.196 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=78.9%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:30.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.196 complete : 0=0.0%, 4=88.7%, 8=10.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.196 issued rwts: total=2090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.196 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:20:30.196 filename2: (groupid=0, jobs=1): err= 0: pid=86548: Fri Jul 26 10:25:41 2024 00:20:30.196 read: IOPS=217, BW=868KiB/s (889kB/s)(8720KiB/10043msec) 00:20:30.196 slat (usec): min=3, max=8024, avg=18.65, stdev=171.68 00:20:30.196 clat (msec): min=14, max=143, avg=73.60, stdev=20.44 00:20:30.196 lat (msec): min=14, max=143, avg=73.62, stdev=20.44 00:20:30.196 clat percentiles (msec): 00:20:30.196 | 1.00th=[ 33], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 58], 00:20:30.196 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:20:30.196 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 108], 00:20:30.196 | 99.00th=[ 111], 99.50th=[ 115], 99.90th=[ 123], 99.95th=[ 131], 00:20:30.196 | 99.99th=[ 144] 00:20:30.196 bw ( KiB/s): min= 640, max= 1320, per=4.17%, avg=865.50, stdev=155.06, samples=20 00:20:30.196 iops : min= 160, max= 330, avg=216.35, stdev=38.79, samples=20 00:20:30.196 lat (msec) : 20=0.73%, 50=13.49%, 100=73.30%, 250=12.48% 00:20:30.196 cpu : usr=38.26%, sys=1.85%, ctx=1070, majf=0, minf=9 00:20:30.196 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.2%, 16=16.9%, 32=0.0%, >=64=0.0% 00:20:30.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.196 complete : 0=0.0%, 4=87.9%, 8=11.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.197 issued rwts: total=2180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.197 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:30.197 filename2: (groupid=0, jobs=1): err= 0: pid=86549: Fri Jul 26 10:25:41 2024 00:20:30.197 read: IOPS=195, BW=782KiB/s (801kB/s)(7844KiB/10034msec) 00:20:30.197 slat (usec): min=3, max=8055, avg=25.89, stdev=229.09 00:20:30.197 clat (msec): min=12, max=152, avg=81.71, stdev=23.82 00:20:30.197 lat (msec): min=12, max=152, avg=81.73, stdev=23.82 00:20:30.197 clat percentiles (msec): 00:20:30.197 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 49], 20.00th=[ 63], 00:20:30.197 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 90], 00:20:30.197 | 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 108], 95.00th=[ 121], 00:20:30.197 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 153], 99.95th=[ 153], 00:20:30.197 | 99.99th=[ 153] 00:20:30.197 bw ( KiB/s): min= 510, max= 1040, per=3.74%, avg=777.60, stdev=176.14, samples=20 00:20:30.197 iops : min= 127, max= 260, avg=194.35, stdev=44.06, samples=20 00:20:30.197 lat (msec) : 20=0.71%, 50=10.05%, 100=63.08%, 250=26.16% 00:20:30.197 cpu : usr=43.06%, sys=2.36%, ctx=1481, majf=0, minf=9 00:20:30.197 IO depths : 1=0.1%, 2=3.5%, 4=14.5%, 8=67.5%, 16=14.4%, 32=0.0%, >=64=0.0% 00:20:30.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.197 complete : 0=0.0%, 4=91.5%, 8=5.3%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.197 issued rwts: total=1961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.197 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:30.197 filename2: (groupid=0, jobs=1): err= 0: pid=86550: Fri Jul 26 10:25:41 2024 00:20:30.197 read: IOPS=219, BW=880KiB/s (901kB/s)(8812KiB/10016msec) 00:20:30.197 slat (usec): min=3, max=7054, avg=26.76, stdev=243.72 00:20:30.197 clat (msec): min=15, max=136, avg=72.60, stdev=21.22 00:20:30.197 lat (msec): min=15, max=136, avg=72.62, stdev=21.23 00:20:30.197 clat percentiles (msec): 00:20:30.197 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 55], 00:20:30.197 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 73], 00:20:30.197 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 105], 95.00th=[ 108], 00:20:30.197 | 
99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 138], 99.95th=[ 138], 00:20:30.197 | 99.99th=[ 138] 00:20:30.197 bw ( KiB/s): min= 640, max= 1072, per=4.21%, avg=874.85, stdev=143.05, samples=20 00:20:30.197 iops : min= 160, max= 268, avg=218.70, stdev=35.76, samples=20 00:20:30.197 lat (msec) : 20=0.27%, 50=16.52%, 100=70.90%, 250=12.30% 00:20:30.197 cpu : usr=38.37%, sys=2.10%, ctx=1172, majf=0, minf=9 00:20:30.197 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=79.8%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:30.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.197 complete : 0=0.0%, 4=88.0%, 8=11.2%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.197 issued rwts: total=2203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.197 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:30.197 filename2: (groupid=0, jobs=1): err= 0: pid=86551: Fri Jul 26 10:25:41 2024 00:20:30.197 read: IOPS=223, BW=894KiB/s (915kB/s)(8948KiB/10013msec) 00:20:30.197 slat (usec): min=4, max=8052, avg=35.44, stdev=359.38 00:20:30.197 clat (msec): min=15, max=131, avg=71.45, stdev=20.10 00:20:30.197 lat (msec): min=15, max=131, avg=71.48, stdev=20.09 00:20:30.197 clat percentiles (msec): 00:20:30.197 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 50], 00:20:30.197 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 72], 00:20:30.197 | 70.00th=[ 82], 80.00th=[ 93], 90.00th=[ 103], 95.00th=[ 107], 00:20:30.197 | 99.00th=[ 112], 99.50th=[ 116], 99.90th=[ 121], 99.95th=[ 130], 00:20:30.197 | 99.99th=[ 132] 00:20:30.197 bw ( KiB/s): min= 712, max= 1080, per=4.29%, avg=891.20, stdev=125.70, samples=20 00:20:30.197 iops : min= 178, max= 270, avg=222.80, stdev=31.43, samples=20 00:20:30.197 lat (msec) : 20=0.27%, 50=19.89%, 100=69.38%, 250=10.46% 00:20:30.197 cpu : usr=37.23%, sys=1.85%, ctx=1214, majf=0, minf=9 00:20:30.197 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.9%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:30.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.197 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.197 issued rwts: total=2237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.197 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:30.197 filename2: (groupid=0, jobs=1): err= 0: pid=86552: Fri Jul 26 10:25:41 2024 00:20:30.197 read: IOPS=215, BW=863KiB/s (884kB/s)(8648KiB/10017msec) 00:20:30.197 slat (usec): min=4, max=4036, avg=20.18, stdev=122.31 00:20:30.197 clat (msec): min=31, max=143, avg=74.02, stdev=21.72 00:20:30.197 lat (msec): min=31, max=144, avg=74.04, stdev=21.71 00:20:30.197 clat percentiles (msec): 00:20:30.197 | 1.00th=[ 38], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 56], 00:20:30.197 | 30.00th=[ 63], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 75], 00:20:30.197 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 109], 00:20:30.197 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 136], 99.95th=[ 144], 00:20:30.197 | 99.99th=[ 144] 00:20:30.197 bw ( KiB/s): min= 528, max= 1048, per=4.14%, avg=859.60, stdev=146.47, samples=20 00:20:30.197 iops : min= 132, max= 262, avg=214.90, stdev=36.62, samples=20 00:20:30.197 lat (msec) : 50=16.14%, 100=70.07%, 250=13.78% 00:20:30.197 cpu : usr=38.10%, sys=1.84%, ctx=1197, majf=0, minf=9 00:20:30.197 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=81.0%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:30.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.197 complete : 0=0.0%, 4=87.9%, 8=11.6%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.197 issued 
rwts: total=2162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.197 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:30.197 00:20:30.197 Run status group 0 (all jobs): 00:20:30.197 READ: bw=20.3MiB/s (21.3MB/s), 782KiB/s-923KiB/s (801kB/s-945kB/s), io=204MiB (214MB), run=10002-10062msec 00:20:30.197 10:25:41 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:30.197 10:25:41 -- target/dif.sh@43 -- # local sub 00:20:30.197 10:25:41 -- target/dif.sh@45 -- # for sub in "$@" 00:20:30.197 10:25:41 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:30.197 10:25:41 -- target/dif.sh@36 -- # local sub_id=0 00:20:30.197 10:25:41 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:30.197 10:25:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.197 10:25:41 -- common/autotest_common.sh@10 -- # set +x 00:20:30.197 10:25:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.197 10:25:41 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:30.197 10:25:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.197 10:25:41 -- common/autotest_common.sh@10 -- # set +x 00:20:30.197 10:25:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.197 10:25:41 -- target/dif.sh@45 -- # for sub in "$@" 00:20:30.197 10:25:41 -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:30.197 10:25:41 -- target/dif.sh@36 -- # local sub_id=1 00:20:30.197 10:25:41 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:30.197 10:25:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.197 10:25:41 -- common/autotest_common.sh@10 -- # set +x 00:20:30.197 10:25:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.197 10:25:41 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:30.197 10:25:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.197 10:25:41 -- common/autotest_common.sh@10 -- # set +x 00:20:30.197 10:25:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.197 10:25:41 -- target/dif.sh@45 -- # for sub in "$@" 00:20:30.197 10:25:41 -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:30.197 10:25:41 -- target/dif.sh@36 -- # local sub_id=2 00:20:30.197 10:25:41 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:30.197 10:25:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.197 10:25:41 -- common/autotest_common.sh@10 -- # set +x 00:20:30.197 10:25:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.197 10:25:41 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:30.197 10:25:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.197 10:25:41 -- common/autotest_common.sh@10 -- # set +x 00:20:30.197 10:25:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.197 10:25:41 -- target/dif.sh@115 -- # NULL_DIF=1 00:20:30.197 10:25:41 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:30.197 10:25:41 -- target/dif.sh@115 -- # numjobs=2 00:20:30.197 10:25:41 -- target/dif.sh@115 -- # iodepth=8 00:20:30.197 10:25:41 -- target/dif.sh@115 -- # runtime=5 00:20:30.197 10:25:41 -- target/dif.sh@115 -- # files=1 00:20:30.197 10:25:41 -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:30.197 10:25:41 -- target/dif.sh@28 -- # local sub 00:20:30.197 10:25:41 -- target/dif.sh@30 -- # for sub in "$@" 00:20:30.197 10:25:41 -- target/dif.sh@31 -- # create_subsystem 0 00:20:30.197 10:25:41 -- target/dif.sh@18 -- # local sub_id=0 00:20:30.197 10:25:41 -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:30.197 10:25:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.197 10:25:41 -- common/autotest_common.sh@10 -- # set +x 00:20:30.197 bdev_null0 00:20:30.197 10:25:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.197 10:25:41 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:30.197 10:25:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.197 10:25:41 -- common/autotest_common.sh@10 -- # set +x 00:20:30.197 10:25:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.197 10:25:41 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:30.197 10:25:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.197 10:25:41 -- common/autotest_common.sh@10 -- # set +x 00:20:30.197 10:25:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.197 10:25:41 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:30.197 10:25:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.197 10:25:41 -- common/autotest_common.sh@10 -- # set +x 00:20:30.197 [2024-07-26 10:25:41.925715] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.197 10:25:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.197 10:25:41 -- target/dif.sh@30 -- # for sub in "$@" 00:20:30.197 10:25:41 -- target/dif.sh@31 -- # create_subsystem 1 00:20:30.197 10:25:41 -- target/dif.sh@18 -- # local sub_id=1 00:20:30.197 10:25:41 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:30.198 10:25:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.198 10:25:41 -- common/autotest_common.sh@10 -- # set +x 00:20:30.198 bdev_null1 00:20:30.198 10:25:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.198 10:25:41 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:30.198 10:25:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.198 10:25:41 -- common/autotest_common.sh@10 -- # set +x 00:20:30.198 10:25:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.198 10:25:41 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:30.198 10:25:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.198 10:25:41 -- common/autotest_common.sh@10 -- # set +x 00:20:30.198 10:25:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.198 10:25:41 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:30.198 10:25:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.198 10:25:41 -- common/autotest_common.sh@10 -- # set +x 00:20:30.198 10:25:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.198 10:25:41 -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:30.198 10:25:41 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:30.198 10:25:41 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:30.198 10:25:41 -- nvmf/common.sh@520 -- # config=() 00:20:30.198 10:25:41 -- nvmf/common.sh@520 -- # local subsystem config 00:20:30.198 10:25:41 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
00:20:30.198 10:25:41 -- target/dif.sh@82 -- # gen_fio_conf 00:20:30.198 10:25:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:30.198 10:25:41 -- target/dif.sh@54 -- # local file 00:20:30.198 10:25:41 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:30.198 10:25:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:30.198 { 00:20:30.198 "params": { 00:20:30.198 "name": "Nvme$subsystem", 00:20:30.198 "trtype": "$TEST_TRANSPORT", 00:20:30.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.198 "adrfam": "ipv4", 00:20:30.198 "trsvcid": "$NVMF_PORT", 00:20:30.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.198 "hdgst": ${hdgst:-false}, 00:20:30.198 "ddgst": ${ddgst:-false} 00:20:30.198 }, 00:20:30.198 "method": "bdev_nvme_attach_controller" 00:20:30.198 } 00:20:30.198 EOF 00:20:30.198 )") 00:20:30.198 10:25:41 -- target/dif.sh@56 -- # cat 00:20:30.198 10:25:41 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:30.198 10:25:41 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:30.198 10:25:41 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:30.198 10:25:41 -- nvmf/common.sh@542 -- # cat 00:20:30.198 10:25:41 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:30.198 10:25:41 -- common/autotest_common.sh@1320 -- # shift 00:20:30.198 10:25:41 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:30.198 10:25:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:30.198 10:25:41 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:30.198 10:25:41 -- target/dif.sh@72 -- # (( file <= files )) 00:20:30.198 10:25:41 -- target/dif.sh@73 -- # cat 00:20:30.198 10:25:41 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:30.198 10:25:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:30.198 10:25:41 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:30.198 10:25:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:30.198 { 00:20:30.198 "params": { 00:20:30.198 "name": "Nvme$subsystem", 00:20:30.198 "trtype": "$TEST_TRANSPORT", 00:20:30.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.198 "adrfam": "ipv4", 00:20:30.198 "trsvcid": "$NVMF_PORT", 00:20:30.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.198 "hdgst": ${hdgst:-false}, 00:20:30.198 "ddgst": ${ddgst:-false} 00:20:30.198 }, 00:20:30.198 "method": "bdev_nvme_attach_controller" 00:20:30.198 } 00:20:30.198 EOF 00:20:30.198 )") 00:20:30.198 10:25:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:30.198 10:25:41 -- target/dif.sh@72 -- # (( file++ )) 00:20:30.198 10:25:41 -- target/dif.sh@72 -- # (( file <= files )) 00:20:30.198 10:25:41 -- nvmf/common.sh@542 -- # cat 00:20:30.198 10:25:41 -- nvmf/common.sh@544 -- # jq . 
00:20:30.198 10:25:41 -- nvmf/common.sh@545 -- # IFS=, 00:20:30.198 10:25:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:30.198 "params": { 00:20:30.198 "name": "Nvme0", 00:20:30.198 "trtype": "tcp", 00:20:30.198 "traddr": "10.0.0.2", 00:20:30.198 "adrfam": "ipv4", 00:20:30.198 "trsvcid": "4420", 00:20:30.198 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:30.198 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:30.198 "hdgst": false, 00:20:30.198 "ddgst": false 00:20:30.198 }, 00:20:30.198 "method": "bdev_nvme_attach_controller" 00:20:30.198 },{ 00:20:30.198 "params": { 00:20:30.198 "name": "Nvme1", 00:20:30.198 "trtype": "tcp", 00:20:30.198 "traddr": "10.0.0.2", 00:20:30.198 "adrfam": "ipv4", 00:20:30.198 "trsvcid": "4420", 00:20:30.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:30.198 "hdgst": false, 00:20:30.198 "ddgst": false 00:20:30.198 }, 00:20:30.198 "method": "bdev_nvme_attach_controller" 00:20:30.198 }' 00:20:30.198 10:25:42 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:30.198 10:25:42 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:30.198 10:25:42 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:30.198 10:25:42 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:30.198 10:25:42 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:30.198 10:25:42 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:30.198 10:25:42 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:30.198 10:25:42 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:30.198 10:25:42 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:30.198 10:25:42 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:30.198 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:30.198 ... 00:20:30.198 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:30.198 ... 00:20:30.198 fio-3.35 00:20:30.198 Starting 4 threads 00:20:30.198 [2024-07-26 10:25:42.601775] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:20:30.198 [2024-07-26 10:25:42.601843] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:34.384 00:20:34.384 filename0: (groupid=0, jobs=1): err= 0: pid=86701: Fri Jul 26 10:25:47 2024 00:20:34.384 read: IOPS=2165, BW=16.9MiB/s (17.7MB/s)(84.6MiB/5001msec) 00:20:34.384 slat (usec): min=6, max=144, avg=25.04, stdev=13.71 00:20:34.384 clat (usec): min=1043, max=6706, avg=3586.71, stdev=677.46 00:20:34.384 lat (usec): min=1054, max=6735, avg=3611.74, stdev=679.36 00:20:34.384 clat percentiles (usec): 00:20:34.384 | 1.00th=[ 1860], 5.00th=[ 2147], 10.00th=[ 2409], 20.00th=[ 3294], 00:20:34.384 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3687], 60.00th=[ 3785], 00:20:34.384 | 70.00th=[ 3916], 80.00th=[ 4080], 90.00th=[ 4293], 95.00th=[ 4490], 00:20:34.384 | 99.00th=[ 4948], 99.50th=[ 5145], 99.90th=[ 5669], 99.95th=[ 5866], 00:20:34.384 | 99.99th=[ 6456] 00:20:34.384 bw ( KiB/s): min=16192, max=18256, per=24.88%, avg=17045.33, stdev=665.49, samples=9 00:20:34.384 iops : min= 2024, max= 2282, avg=2130.67, stdev=83.19, samples=9 00:20:34.384 lat (msec) : 2=2.72%, 4=72.90%, 10=24.37% 00:20:34.384 cpu : usr=93.90%, sys=5.30%, ctx=5, majf=0, minf=0 00:20:34.384 IO depths : 1=6.5%, 2=16.1%, 4=55.3%, 8=22.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:34.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.384 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.384 issued rwts: total=10832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:34.384 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:34.384 filename0: (groupid=0, jobs=1): err= 0: pid=86702: Fri Jul 26 10:25:47 2024 00:20:34.384 read: IOPS=2165, BW=16.9MiB/s (17.7MB/s)(84.6MiB/5001msec) 00:20:34.384 slat (usec): min=6, max=155, avg=23.56, stdev=14.24 00:20:34.384 clat (usec): min=273, max=7477, avg=3593.59, stdev=733.62 00:20:34.384 lat (usec): min=285, max=7509, avg=3617.15, stdev=735.96 00:20:34.384 clat percentiles (usec): 00:20:34.384 | 1.00th=[ 1123], 5.00th=[ 2114], 10.00th=[ 2442], 20.00th=[ 3326], 00:20:34.384 | 30.00th=[ 3523], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 3785], 00:20:34.384 | 70.00th=[ 3916], 80.00th=[ 4113], 90.00th=[ 4293], 95.00th=[ 4555], 00:20:34.384 | 99.00th=[ 5014], 99.50th=[ 5211], 99.90th=[ 6521], 99.95th=[ 7308], 00:20:34.384 | 99.99th=[ 7308] 00:20:34.384 bw ( KiB/s): min=16240, max=18640, per=24.90%, avg=17056.00, stdev=811.20, samples=9 00:20:34.384 iops : min= 2030, max= 2330, avg=2132.00, stdev=101.40, samples=9 00:20:34.384 lat (usec) : 500=0.03%, 750=0.05%, 1000=0.27% 00:20:34.384 lat (msec) : 2=3.23%, 4=70.82%, 10=25.60% 00:20:34.384 cpu : usr=93.72%, sys=5.24%, ctx=25, majf=0, minf=9 00:20:34.384 IO depths : 1=6.2%, 2=16.1%, 4=55.2%, 8=22.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:34.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.384 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.384 issued rwts: total=10830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:34.384 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:34.384 filename1: (groupid=0, jobs=1): err= 0: pid=86703: Fri Jul 26 10:25:47 2024 00:20:34.384 read: IOPS=2186, BW=17.1MiB/s (17.9MB/s)(85.4MiB/5002msec) 00:20:34.385 slat (usec): min=6, max=144, avg=24.32, stdev=13.63 00:20:34.385 clat (usec): min=294, max=7341, avg=3556.49, stdev=750.75 00:20:34.385 lat (usec): min=307, max=7355, avg=3580.81, stdev=753.15 00:20:34.385 clat percentiles (usec): 
00:20:34.385 | 1.00th=[ 1106], 5.00th=[ 2057], 10.00th=[ 2409], 20.00th=[ 3261], 00:20:34.385 | 30.00th=[ 3490], 40.00th=[ 3589], 50.00th=[ 3687], 60.00th=[ 3752], 00:20:34.385 | 70.00th=[ 3884], 80.00th=[ 4080], 90.00th=[ 4293], 95.00th=[ 4490], 00:20:34.385 | 99.00th=[ 4883], 99.50th=[ 5145], 99.90th=[ 5866], 99.95th=[ 5997], 00:20:34.385 | 99.99th=[ 6521] 00:20:34.385 bw ( KiB/s): min=15600, max=20176, per=25.14%, avg=17226.78, stdev=1288.24, samples=9 00:20:34.385 iops : min= 1950, max= 2522, avg=2153.33, stdev=161.03, samples=9 00:20:34.385 lat (usec) : 500=0.04%, 750=0.10%, 1000=0.19% 00:20:34.385 lat (msec) : 2=3.97%, 4=71.18%, 10=24.53% 00:20:34.385 cpu : usr=94.32%, sys=4.86%, ctx=11, majf=0, minf=0 00:20:34.385 IO depths : 1=6.2%, 2=15.7%, 4=55.4%, 8=22.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:34.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.385 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.385 issued rwts: total=10935,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:34.385 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:34.385 filename1: (groupid=0, jobs=1): err= 0: pid=86704: Fri Jul 26 10:25:47 2024 00:20:34.385 read: IOPS=2047, BW=16.0MiB/s (16.8MB/s)(80.0MiB/5001msec) 00:20:34.385 slat (usec): min=6, max=111, avg=23.02, stdev=10.75 00:20:34.385 clat (usec): min=1239, max=7342, avg=3813.69, stdev=632.46 00:20:34.385 lat (usec): min=1260, max=7369, avg=3836.71, stdev=631.35 00:20:34.385 clat percentiles (usec): 00:20:34.385 | 1.00th=[ 1975], 5.00th=[ 2376], 10.00th=[ 3228], 20.00th=[ 3556], 00:20:34.385 | 30.00th=[ 3621], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3916], 00:20:34.385 | 70.00th=[ 4080], 80.00th=[ 4228], 90.00th=[ 4490], 95.00th=[ 4817], 00:20:34.385 | 99.00th=[ 5276], 99.50th=[ 5473], 99.90th=[ 5866], 99.95th=[ 6128], 00:20:34.385 | 99.99th=[ 6587] 00:20:34.385 bw ( KiB/s): min=15776, max=17920, per=24.30%, avg=16650.67, stdev=628.39, samples=9 00:20:34.385 iops : min= 1972, max= 2240, avg=2081.33, stdev=78.55, samples=9 00:20:34.385 lat (msec) : 2=1.27%, 4=65.20%, 10=33.53% 00:20:34.385 cpu : usr=94.70%, sys=4.46%, ctx=10, majf=0, minf=0 00:20:34.385 IO depths : 1=7.1%, 2=19.2%, 4=53.5%, 8=20.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:34.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.385 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.385 issued rwts: total=10239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:34.385 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:34.385 00:20:34.385 Run status group 0 (all jobs): 00:20:34.385 READ: bw=66.9MiB/s (70.2MB/s), 16.0MiB/s-17.1MiB/s (16.8MB/s-17.9MB/s), io=335MiB (351MB), run=5001-5002msec 00:20:34.643 10:25:47 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:34.643 10:25:47 -- target/dif.sh@43 -- # local sub 00:20:34.643 10:25:47 -- target/dif.sh@45 -- # for sub in "$@" 00:20:34.643 10:25:47 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:34.643 10:25:47 -- target/dif.sh@36 -- # local sub_id=0 00:20:34.643 10:25:47 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:34.643 10:25:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:34.643 10:25:47 -- common/autotest_common.sh@10 -- # set +x 00:20:34.643 10:25:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:34.643 10:25:47 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:34.643 10:25:47 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:20:34.643 10:25:47 -- common/autotest_common.sh@10 -- # set +x 00:20:34.643 10:25:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:34.643 10:25:47 -- target/dif.sh@45 -- # for sub in "$@" 00:20:34.643 10:25:47 -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:34.643 10:25:47 -- target/dif.sh@36 -- # local sub_id=1 00:20:34.643 10:25:47 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:34.643 10:25:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:34.643 10:25:47 -- common/autotest_common.sh@10 -- # set +x 00:20:34.643 10:25:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:34.643 10:25:47 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:34.644 10:25:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:34.644 10:25:47 -- common/autotest_common.sh@10 -- # set +x 00:20:34.644 ************************************ 00:20:34.644 END TEST fio_dif_rand_params 00:20:34.644 ************************************ 00:20:34.644 10:25:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:34.644 00:20:34.644 real 0m23.434s 00:20:34.644 user 2m4.734s 00:20:34.644 sys 0m7.531s 00:20:34.644 10:25:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:34.644 10:25:47 -- common/autotest_common.sh@10 -- # set +x 00:20:34.644 10:25:48 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:34.644 10:25:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:34.644 10:25:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:34.644 10:25:48 -- common/autotest_common.sh@10 -- # set +x 00:20:34.644 ************************************ 00:20:34.644 START TEST fio_dif_digest 00:20:34.644 ************************************ 00:20:34.644 10:25:48 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:20:34.644 10:25:48 -- target/dif.sh@123 -- # local NULL_DIF 00:20:34.644 10:25:48 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:34.644 10:25:48 -- target/dif.sh@125 -- # local hdgst ddgst 00:20:34.644 10:25:48 -- target/dif.sh@127 -- # NULL_DIF=3 00:20:34.644 10:25:48 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:34.644 10:25:48 -- target/dif.sh@127 -- # numjobs=3 00:20:34.644 10:25:48 -- target/dif.sh@127 -- # iodepth=3 00:20:34.644 10:25:48 -- target/dif.sh@127 -- # runtime=10 00:20:34.644 10:25:48 -- target/dif.sh@128 -- # hdgst=true 00:20:34.644 10:25:48 -- target/dif.sh@128 -- # ddgst=true 00:20:34.644 10:25:48 -- target/dif.sh@130 -- # create_subsystems 0 00:20:34.644 10:25:48 -- target/dif.sh@28 -- # local sub 00:20:34.644 10:25:48 -- target/dif.sh@30 -- # for sub in "$@" 00:20:34.644 10:25:48 -- target/dif.sh@31 -- # create_subsystem 0 00:20:34.644 10:25:48 -- target/dif.sh@18 -- # local sub_id=0 00:20:34.644 10:25:48 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:34.644 10:25:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:34.644 10:25:48 -- common/autotest_common.sh@10 -- # set +x 00:20:34.644 bdev_null0 00:20:34.644 10:25:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:34.644 10:25:48 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:34.644 10:25:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:34.644 10:25:48 -- common/autotest_common.sh@10 -- # set +x 00:20:34.644 10:25:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:34.644 10:25:48 -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:34.644 10:25:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:34.644 10:25:48 -- common/autotest_common.sh@10 -- # set +x 00:20:34.644 10:25:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:34.644 10:25:48 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:34.644 10:25:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:34.644 10:25:48 -- common/autotest_common.sh@10 -- # set +x 00:20:34.644 [2024-07-26 10:25:48.060286] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.644 10:25:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:34.644 10:25:48 -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:34.644 10:25:48 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:34.644 10:25:48 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:34.644 10:25:48 -- nvmf/common.sh@520 -- # config=() 00:20:34.644 10:25:48 -- nvmf/common.sh@520 -- # local subsystem config 00:20:34.644 10:25:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:34.644 10:25:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:34.644 { 00:20:34.644 "params": { 00:20:34.644 "name": "Nvme$subsystem", 00:20:34.644 "trtype": "$TEST_TRANSPORT", 00:20:34.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.644 "adrfam": "ipv4", 00:20:34.644 "trsvcid": "$NVMF_PORT", 00:20:34.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.644 "hdgst": ${hdgst:-false}, 00:20:34.644 "ddgst": ${ddgst:-false} 00:20:34.644 }, 00:20:34.644 "method": "bdev_nvme_attach_controller" 00:20:34.644 } 00:20:34.644 EOF 00:20:34.644 )") 00:20:34.644 10:25:48 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:34.644 10:25:48 -- target/dif.sh@82 -- # gen_fio_conf 00:20:34.644 10:25:48 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:34.644 10:25:48 -- target/dif.sh@54 -- # local file 00:20:34.644 10:25:48 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:34.644 10:25:48 -- target/dif.sh@56 -- # cat 00:20:34.644 10:25:48 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:34.644 10:25:48 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:34.644 10:25:48 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:34.644 10:25:48 -- nvmf/common.sh@542 -- # cat 00:20:34.644 10:25:48 -- common/autotest_common.sh@1320 -- # shift 00:20:34.644 10:25:48 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:34.644 10:25:48 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:34.644 10:25:48 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:34.644 10:25:48 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:34.644 10:25:48 -- target/dif.sh@72 -- # (( file <= files )) 00:20:34.644 10:25:48 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:34.644 10:25:48 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:34.644 10:25:48 -- nvmf/common.sh@544 -- # jq . 
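Note: the create_subsystems trace above boils down to four RPC calls. A minimal sketch, assuming the TCP transport was already created by nvmftestinit earlier in the run and that rpc.py talks to the default /var/tmp/spdk.sock socket:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # 64 MB null bdev, 512-byte blocks, 16 bytes of metadata, end-to-end DIF type 3
  $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  # subsystem, namespace and TCP listener on the target-side veth address used in this run
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420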
00:20:34.644 10:25:48 -- nvmf/common.sh@545 -- # IFS=, 00:20:34.644 10:25:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:34.644 "params": { 00:20:34.644 "name": "Nvme0", 00:20:34.644 "trtype": "tcp", 00:20:34.644 "traddr": "10.0.0.2", 00:20:34.644 "adrfam": "ipv4", 00:20:34.644 "trsvcid": "4420", 00:20:34.644 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:34.644 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:34.644 "hdgst": true, 00:20:34.644 "ddgst": true 00:20:34.644 }, 00:20:34.644 "method": "bdev_nvme_attach_controller" 00:20:34.644 }' 00:20:34.644 10:25:48 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:34.644 10:25:48 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:34.644 10:25:48 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:34.644 10:25:48 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:34.644 10:25:48 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:34.644 10:25:48 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:34.903 10:25:48 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:34.903 10:25:48 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:34.903 10:25:48 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:34.903 10:25:48 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:34.903 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:34.903 ... 00:20:34.903 fio-3.35 00:20:34.903 Starting 3 threads 00:20:35.469 [2024-07-26 10:25:48.620535] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
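The fio job file is fed in over /dev/fd/61 and is not echoed into the log; a rough sketch of what dif.sh generates for this digest run (the bdev name Nvme0n1 is an assumption derived from the attached controller name Nvme0, and the option set is reconstructed from the parameters traced above):

  [global]
  thread=1          ; the spdk_bdev ioengine runs jobs as threads, not forked processes
  time_based=1
  runtime=10
  [filename0]
  filename=Nvme0n1  ; bdev exposed by the bdev_nvme_attach_controller JSON above (assumed name)
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3

fio itself is launched as shown above (--ioengine=spdk_bdev --spdk_json_conf <json> <job file>); header and data digests are enabled on the NVMe/TCP connection by the hdgst/ddgst flags in the JSON, not by fio options.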
00:20:35.469 [2024-07-26 10:25:48.620647] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:45.510 00:20:45.510 filename0: (groupid=0, jobs=1): err= 0: pid=86810: Fri Jul 26 10:25:58 2024 00:20:45.510 read: IOPS=240, BW=30.0MiB/s (31.5MB/s)(300MiB/10009msec) 00:20:45.510 slat (usec): min=6, max=103, avg=18.39, stdev=10.94 00:20:45.510 clat (usec): min=11301, max=20943, avg=12454.01, stdev=970.71 00:20:45.510 lat (usec): min=11314, max=20957, avg=12472.40, stdev=971.80 00:20:45.510 clat percentiles (usec): 00:20:45.510 | 1.00th=[11469], 5.00th=[11731], 10.00th=[11863], 20.00th=[11863], 00:20:45.510 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12125], 60.00th=[12387], 00:20:45.510 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13304], 95.00th=[13698], 00:20:45.510 | 99.00th=[16909], 99.50th=[17171], 99.90th=[20841], 99.95th=[20841], 00:20:45.510 | 99.99th=[20841] 00:20:45.510 bw ( KiB/s): min=27648, max=31488, per=33.33%, avg=30720.00, stdev=1280.00, samples=19 00:20:45.510 iops : min= 216, max= 246, avg=240.00, stdev=10.00, samples=19 00:20:45.510 lat (msec) : 20=99.75%, 50=0.25% 00:20:45.510 cpu : usr=93.15%, sys=6.30%, ctx=17, majf=0, minf=9 00:20:45.510 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:45.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.510 issued rwts: total=2403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:45.510 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:45.510 filename0: (groupid=0, jobs=1): err= 0: pid=86811: Fri Jul 26 10:25:58 2024 00:20:45.510 read: IOPS=240, BW=30.0MiB/s (31.5MB/s)(300MiB/10008msec) 00:20:45.510 slat (usec): min=6, max=152, avg=15.15, stdev= 9.98 00:20:45.510 clat (usec): min=9484, max=20926, avg=12458.15, stdev=979.64 00:20:45.510 lat (usec): min=9497, max=20941, avg=12473.30, stdev=980.78 00:20:45.510 clat percentiles (usec): 00:20:45.510 | 1.00th=[11469], 5.00th=[11731], 10.00th=[11863], 20.00th=[11863], 00:20:45.510 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:20:45.510 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13304], 95.00th=[13698], 00:20:45.510 | 99.00th=[16909], 99.50th=[17171], 99.90th=[20841], 99.95th=[20841], 00:20:45.510 | 99.99th=[20841] 00:20:45.510 bw ( KiB/s): min=27648, max=32256, per=33.33%, avg=30723.32, stdev=1356.68, samples=19 00:20:45.510 iops : min= 216, max= 252, avg=240.00, stdev=10.58, samples=19 00:20:45.510 lat (msec) : 10=0.12%, 20=99.63%, 50=0.25% 00:20:45.510 cpu : usr=92.71%, sys=6.71%, ctx=10, majf=0, minf=9 00:20:45.510 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:45.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.510 issued rwts: total=2403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:45.510 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:45.510 filename0: (groupid=0, jobs=1): err= 0: pid=86812: Fri Jul 26 10:25:58 2024 00:20:45.510 read: IOPS=240, BW=30.0MiB/s (31.5MB/s)(300MiB/10012msec) 00:20:45.510 slat (usec): min=6, max=347, avg=18.05, stdev=13.73 00:20:45.510 clat (usec): min=11301, max=20960, avg=12457.15, stdev=973.03 00:20:45.510 lat (usec): min=11314, max=20972, avg=12475.20, stdev=974.17 00:20:45.510 clat percentiles (usec): 00:20:45.510 | 1.00th=[11469], 5.00th=[11731], 10.00th=[11863], 
20.00th=[11863], 00:20:45.510 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:20:45.510 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13304], 95.00th=[13698], 00:20:45.510 | 99.00th=[16909], 99.50th=[17171], 99.90th=[20841], 99.95th=[20841], 00:20:45.510 | 99.99th=[20841] 00:20:45.510 bw ( KiB/s): min=27648, max=31488, per=33.37%, avg=30752.10, stdev=1254.10, samples=20 00:20:45.510 iops : min= 216, max= 246, avg=240.25, stdev= 9.80, samples=20 00:20:45.510 lat (msec) : 20=99.75%, 50=0.25% 00:20:45.510 cpu : usr=92.40%, sys=6.79%, ctx=69, majf=0, minf=9 00:20:45.510 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:45.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.510 issued rwts: total=2403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:45.510 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:45.510 00:20:45.510 Run status group 0 (all jobs): 00:20:45.510 READ: bw=90.0MiB/s (94.4MB/s), 30.0MiB/s-30.0MiB/s (31.5MB/s-31.5MB/s), io=901MiB (945MB), run=10008-10012msec 00:20:45.769 10:25:58 -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:45.769 10:25:58 -- target/dif.sh@43 -- # local sub 00:20:45.769 10:25:58 -- target/dif.sh@45 -- # for sub in "$@" 00:20:45.769 10:25:58 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:45.769 10:25:58 -- target/dif.sh@36 -- # local sub_id=0 00:20:45.769 10:25:58 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:45.769 10:25:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:45.769 10:25:58 -- common/autotest_common.sh@10 -- # set +x 00:20:45.769 10:25:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:45.769 10:25:58 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:45.769 10:25:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:45.769 10:25:58 -- common/autotest_common.sh@10 -- # set +x 00:20:45.769 10:25:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:45.769 00:20:45.769 real 0m10.958s 00:20:45.769 user 0m28.446s 00:20:45.769 sys 0m2.258s 00:20:45.769 10:25:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:45.769 10:25:58 -- common/autotest_common.sh@10 -- # set +x 00:20:45.769 ************************************ 00:20:45.769 END TEST fio_dif_digest 00:20:45.769 ************************************ 00:20:45.769 10:25:59 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:45.769 10:25:59 -- target/dif.sh@147 -- # nvmftestfini 00:20:45.769 10:25:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:45.769 10:25:59 -- nvmf/common.sh@116 -- # sync 00:20:45.769 10:25:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:45.769 10:25:59 -- nvmf/common.sh@119 -- # set +e 00:20:45.769 10:25:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:45.769 10:25:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:45.769 rmmod nvme_tcp 00:20:45.769 rmmod nvme_fabrics 00:20:45.769 rmmod nvme_keyring 00:20:45.769 10:25:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:45.769 10:25:59 -- nvmf/common.sh@123 -- # set -e 00:20:45.769 10:25:59 -- nvmf/common.sh@124 -- # return 0 00:20:45.770 10:25:59 -- nvmf/common.sh@477 -- # '[' -n 86049 ']' 00:20:45.770 10:25:59 -- nvmf/common.sh@478 -- # killprocess 86049 00:20:45.770 10:25:59 -- common/autotest_common.sh@926 -- # '[' -z 86049 ']' 00:20:45.770 10:25:59 -- common/autotest_common.sh@930 -- # kill -0 
86049 00:20:45.770 10:25:59 -- common/autotest_common.sh@931 -- # uname 00:20:45.770 10:25:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:45.770 10:25:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86049 00:20:45.770 10:25:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:45.770 10:25:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:45.770 killing process with pid 86049 00:20:45.770 10:25:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86049' 00:20:45.770 10:25:59 -- common/autotest_common.sh@945 -- # kill 86049 00:20:45.770 10:25:59 -- common/autotest_common.sh@950 -- # wait 86049 00:20:46.028 10:25:59 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:20:46.028 10:25:59 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:46.286 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:46.286 Waiting for block devices as requested 00:20:46.554 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:20:46.554 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:20:46.554 10:25:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:46.554 10:25:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:46.554 10:25:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:46.554 10:25:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:46.554 10:25:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.554 10:25:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:46.554 10:25:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.554 10:25:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:46.554 00:20:46.554 real 0m59.572s 00:20:46.554 user 3m48.163s 00:20:46.554 sys 0m18.766s 00:20:46.554 10:25:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:46.554 10:25:59 -- common/autotest_common.sh@10 -- # set +x 00:20:46.554 ************************************ 00:20:46.554 END TEST nvmf_dif 00:20:46.554 ************************************ 00:20:46.829 10:26:00 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:46.829 10:26:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:46.829 10:26:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:46.829 10:26:00 -- common/autotest_common.sh@10 -- # set +x 00:20:46.829 ************************************ 00:20:46.829 START TEST nvmf_abort_qd_sizes 00:20:46.829 ************************************ 00:20:46.829 10:26:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:46.829 * Looking for test storage... 
00:20:46.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:46.829 10:26:00 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:46.829 10:26:00 -- nvmf/common.sh@7 -- # uname -s 00:20:46.829 10:26:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.829 10:26:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.829 10:26:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.829 10:26:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.829 10:26:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.829 10:26:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.829 10:26:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.829 10:26:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.829 10:26:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.829 10:26:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.829 10:26:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 00:20:46.829 10:26:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff23582e-2e24-4796-b69f-f798c3c56909 00:20:46.829 10:26:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.829 10:26:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.829 10:26:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:46.829 10:26:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:46.829 10:26:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.829 10:26:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.829 10:26:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.829 10:26:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.829 10:26:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.829 10:26:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.829 10:26:00 -- paths/export.sh@5 -- # export PATH 00:20:46.829 10:26:00 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.829 10:26:00 -- nvmf/common.sh@46 -- # : 0 00:20:46.829 10:26:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:46.829 10:26:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:46.829 10:26:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:46.829 10:26:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.829 10:26:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.829 10:26:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:46.829 10:26:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:46.829 10:26:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:46.829 10:26:00 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:20:46.829 10:26:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:46.829 10:26:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.829 10:26:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:46.829 10:26:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:46.829 10:26:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:46.829 10:26:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.829 10:26:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:46.829 10:26:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.829 10:26:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:46.829 10:26:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:46.829 10:26:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:46.829 10:26:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:46.829 10:26:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:46.829 10:26:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:46.829 10:26:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:46.829 10:26:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.829 10:26:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:46.829 10:26:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:46.829 10:26:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:46.829 10:26:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:46.829 10:26:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:46.829 10:26:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:46.829 10:26:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:46.829 10:26:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:46.829 10:26:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:46.829 10:26:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:46.829 10:26:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:46.829 10:26:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:46.829 Cannot find device "nvmf_tgt_br" 00:20:46.829 10:26:00 -- nvmf/common.sh@154 -- # true 00:20:46.829 10:26:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:46.829 Cannot find device "nvmf_tgt_br2" 00:20:46.829 10:26:00 -- nvmf/common.sh@155 -- # true 
00:20:46.829 10:26:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:46.829 10:26:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:46.829 Cannot find device "nvmf_tgt_br" 00:20:46.829 10:26:00 -- nvmf/common.sh@157 -- # true 00:20:46.829 10:26:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:46.829 Cannot find device "nvmf_tgt_br2" 00:20:46.829 10:26:00 -- nvmf/common.sh@158 -- # true 00:20:46.829 10:26:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:46.829 10:26:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:46.830 10:26:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:46.830 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:46.830 10:26:00 -- nvmf/common.sh@161 -- # true 00:20:46.830 10:26:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:46.830 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:46.830 10:26:00 -- nvmf/common.sh@162 -- # true 00:20:46.830 10:26:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:46.830 10:26:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:46.830 10:26:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:47.088 10:26:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:47.088 10:26:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:47.088 10:26:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:47.088 10:26:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:47.088 10:26:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:47.088 10:26:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:47.088 10:26:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:47.088 10:26:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:47.088 10:26:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:47.088 10:26:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:47.088 10:26:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:47.088 10:26:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:47.088 10:26:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:47.088 10:26:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:47.088 10:26:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:47.088 10:26:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:47.088 10:26:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:47.088 10:26:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:47.088 10:26:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:47.088 10:26:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:47.088 10:26:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:47.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:47.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:20:47.088 00:20:47.088 --- 10.0.0.2 ping statistics --- 00:20:47.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.088 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:20:47.088 10:26:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:47.088 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:47.088 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:20:47.088 00:20:47.088 --- 10.0.0.3 ping statistics --- 00:20:47.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.088 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:47.088 10:26:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:47.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:47.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:20:47.088 00:20:47.088 --- 10.0.0.1 ping statistics --- 00:20:47.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.088 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:47.088 10:26:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.088 10:26:00 -- nvmf/common.sh@421 -- # return 0 00:20:47.089 10:26:00 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:20:47.089 10:26:00 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:47.655 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:47.914 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:20:47.914 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:20:47.914 10:26:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.914 10:26:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:47.914 10:26:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:47.914 10:26:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.914 10:26:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:47.914 10:26:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:47.914 10:26:01 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:20:47.914 10:26:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:47.914 10:26:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:47.914 10:26:01 -- common/autotest_common.sh@10 -- # set +x 00:20:47.914 10:26:01 -- nvmf/common.sh@469 -- # nvmfpid=87411 00:20:47.914 10:26:01 -- nvmf/common.sh@470 -- # waitforlisten 87411 00:20:47.914 10:26:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:47.914 10:26:01 -- common/autotest_common.sh@819 -- # '[' -z 87411 ']' 00:20:47.914 10:26:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.914 10:26:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:47.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.914 10:26:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.914 10:26:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:47.914 10:26:01 -- common/autotest_common.sh@10 -- # set +x 00:20:48.172 [2024-07-26 10:26:01.390180] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
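For reference, the network that nvmf_veth_init builds in the trace above, condensed into plain ip commands (link-up steps omitted; names and addresses exactly as used in this run):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

Connectivity is then verified with the three pings above, and nvmf_tgt is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xf), so 10.0.0.2:4420 is reachable from the host across the bridge.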
00:20:48.172 [2024-07-26 10:26:01.390291] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.172 [2024-07-26 10:26:01.533170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:48.172 [2024-07-26 10:26:01.623994] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:48.172 [2024-07-26 10:26:01.624359] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.172 [2024-07-26 10:26:01.624506] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.172 [2024-07-26 10:26:01.624653] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:48.172 [2024-07-26 10:26:01.624838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.172 [2024-07-26 10:26:01.625361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.172 [2024-07-26 10:26:01.625503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:48.173 [2024-07-26 10:26:01.625637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.107 10:26:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:49.107 10:26:02 -- common/autotest_common.sh@852 -- # return 0 00:20:49.107 10:26:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:49.107 10:26:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:49.107 10:26:02 -- common/autotest_common.sh@10 -- # set +x 00:20:49.107 10:26:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.107 10:26:02 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:49.107 10:26:02 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:20:49.107 10:26:02 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:20:49.107 10:26:02 -- scripts/common.sh@311 -- # local bdf bdfs 00:20:49.107 10:26:02 -- scripts/common.sh@312 -- # local nvmes 00:20:49.107 10:26:02 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:20:49.107 10:26:02 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:49.107 10:26:02 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:20:49.107 10:26:02 -- scripts/common.sh@297 -- # local bdf= 00:20:49.107 10:26:02 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:20:49.107 10:26:02 -- scripts/common.sh@232 -- # local class 00:20:49.107 10:26:02 -- scripts/common.sh@233 -- # local subclass 00:20:49.107 10:26:02 -- scripts/common.sh@234 -- # local progif 00:20:49.107 10:26:02 -- scripts/common.sh@235 -- # printf %02x 1 00:20:49.107 10:26:02 -- scripts/common.sh@235 -- # class=01 00:20:49.107 10:26:02 -- scripts/common.sh@236 -- # printf %02x 8 00:20:49.107 10:26:02 -- scripts/common.sh@236 -- # subclass=08 00:20:49.107 10:26:02 -- scripts/common.sh@237 -- # printf %02x 2 00:20:49.107 10:26:02 -- scripts/common.sh@237 -- # progif=02 00:20:49.107 10:26:02 -- scripts/common.sh@239 -- # hash lspci 00:20:49.108 10:26:02 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:20:49.108 10:26:02 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:20:49.108 10:26:02 -- scripts/common.sh@242 -- # grep -i -- -p02 00:20:49.108 10:26:02 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:49.108 10:26:02 -- scripts/common.sh@244 -- # tr -d '"' 00:20:49.108 10:26:02 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:49.108 10:26:02 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:20:49.108 10:26:02 -- scripts/common.sh@15 -- # local i 00:20:49.108 10:26:02 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:20:49.108 10:26:02 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:49.108 10:26:02 -- scripts/common.sh@24 -- # return 0 00:20:49.108 10:26:02 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:20:49.108 10:26:02 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:49.108 10:26:02 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:20:49.108 10:26:02 -- scripts/common.sh@15 -- # local i 00:20:49.108 10:26:02 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:20:49.108 10:26:02 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:49.108 10:26:02 -- scripts/common.sh@24 -- # return 0 00:20:49.108 10:26:02 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:20:49.108 10:26:02 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:20:49.108 10:26:02 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:20:49.108 10:26:02 -- scripts/common.sh@322 -- # uname -s 00:20:49.108 10:26:02 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:20:49.108 10:26:02 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:20:49.108 10:26:02 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:20:49.108 10:26:02 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:20:49.108 10:26:02 -- scripts/common.sh@322 -- # uname -s 00:20:49.108 10:26:02 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:20:49.108 10:26:02 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:20:49.108 10:26:02 -- scripts/common.sh@327 -- # (( 2 )) 00:20:49.108 10:26:02 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:20:49.108 10:26:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:49.108 10:26:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:49.108 10:26:02 -- common/autotest_common.sh@10 -- # set +x 00:20:49.108 ************************************ 00:20:49.108 START TEST spdk_target_abort 00:20:49.108 ************************************ 00:20:49.108 10:26:02 -- common/autotest_common.sh@1104 -- # spdk_target 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:20:49.108 10:26:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.108 10:26:02 -- common/autotest_common.sh@10 -- # set +x 00:20:49.108 spdk_targetn1 00:20:49.108 10:26:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:49.108 10:26:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.108 10:26:02 -- common/autotest_common.sh@10 -- # set +x 00:20:49.108 [2024-07-26 
10:26:02.517841] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.108 10:26:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:20:49.108 10:26:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.108 10:26:02 -- common/autotest_common.sh@10 -- # set +x 00:20:49.108 10:26:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:20:49.108 10:26:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.108 10:26:02 -- common/autotest_common.sh@10 -- # set +x 00:20:49.108 10:26:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:20:49.108 10:26:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.108 10:26:02 -- common/autotest_common.sh@10 -- # set +x 00:20:49.108 [2024-07-26 10:26:02.550134] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.108 10:26:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:49.108 10:26:02 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:52.392 Initializing NVMe Controllers 00:20:52.392 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:20:52.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:20:52.392 Initialization complete. Launching workers. 00:20:52.392 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 9605, failed: 0 00:20:52.392 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1038, failed to submit 8567 00:20:52.392 success 720, unsuccess 318, failed 0 00:20:52.392 10:26:05 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:52.392 10:26:05 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:55.677 Initializing NVMe Controllers 00:20:55.677 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:20:55.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:20:55.677 Initialization complete. Launching workers. 00:20:55.677 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8994, failed: 0 00:20:55.677 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1181, failed to submit 7813 00:20:55.677 success 384, unsuccess 797, failed 0 00:20:55.677 10:26:09 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:55.677 10:26:09 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:58.972 Initializing NVMe Controllers 00:20:58.972 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:20:58.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:20:58.972 Initialization complete. Launching workers. 
00:20:58.972 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 28729, failed: 0 00:20:58.972 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2214, failed to submit 26515 00:20:58.972 success 377, unsuccess 1837, failed 0 00:20:58.972 10:26:12 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:20:58.972 10:26:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:58.972 10:26:12 -- common/autotest_common.sh@10 -- # set +x 00:20:58.972 10:26:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:58.972 10:26:12 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:58.972 10:26:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:58.972 10:26:12 -- common/autotest_common.sh@10 -- # set +x 00:20:59.231 10:26:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:59.231 10:26:12 -- target/abort_qd_sizes.sh@62 -- # killprocess 87411 00:20:59.231 10:26:12 -- common/autotest_common.sh@926 -- # '[' -z 87411 ']' 00:20:59.231 10:26:12 -- common/autotest_common.sh@930 -- # kill -0 87411 00:20:59.231 10:26:12 -- common/autotest_common.sh@931 -- # uname 00:20:59.231 10:26:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:59.231 10:26:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87411 00:20:59.231 killing process with pid 87411 00:20:59.231 10:26:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:59.231 10:26:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:59.231 10:26:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87411' 00:20:59.231 10:26:12 -- common/autotest_common.sh@945 -- # kill 87411 00:20:59.231 10:26:12 -- common/autotest_common.sh@950 -- # wait 87411 00:20:59.489 00:20:59.489 ************************************ 00:20:59.489 END TEST spdk_target_abort 00:20:59.489 ************************************ 00:20:59.489 real 0m10.423s 00:20:59.489 user 0m42.319s 00:20:59.489 sys 0m2.079s 00:20:59.489 10:26:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:59.489 10:26:12 -- common/autotest_common.sh@10 -- # set +x 00:20:59.489 10:26:12 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:20:59.489 10:26:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:59.489 10:26:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:59.489 10:26:12 -- common/autotest_common.sh@10 -- # set +x 00:20:59.489 ************************************ 00:20:59.489 START TEST kernel_target_abort 00:20:59.489 ************************************ 00:20:59.489 10:26:12 -- common/autotest_common.sh@1104 -- # kernel_target 00:20:59.489 10:26:12 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:20:59.489 10:26:12 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:20:59.489 10:26:12 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:20:59.489 10:26:12 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:20:59.489 10:26:12 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:20:59.489 10:26:12 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:20:59.489 10:26:12 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:59.489 10:26:12 -- nvmf/common.sh@627 -- # local block nvme 00:20:59.489 10:26:12 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:20:59.489 10:26:12 -- nvmf/common.sh@630 -- # modprobe nvmet 00:20:59.748 10:26:12 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:59.748 10:26:12 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:00.005 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:00.005 Waiting for block devices as requested 00:21:00.005 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:21:00.005 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:21:00.263 10:26:13 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:21:00.263 10:26:13 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:00.263 10:26:13 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:21:00.263 10:26:13 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:21:00.263 10:26:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:00.263 No valid GPT data, bailing 00:21:00.263 10:26:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:00.263 10:26:13 -- scripts/common.sh@393 -- # pt= 00:21:00.263 10:26:13 -- scripts/common.sh@394 -- # return 1 00:21:00.263 10:26:13 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:21:00.263 10:26:13 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:21:00.263 10:26:13 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:00.263 10:26:13 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:21:00.263 10:26:13 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:21:00.263 10:26:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:00.263 No valid GPT data, bailing 00:21:00.263 10:26:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:00.263 10:26:13 -- scripts/common.sh@393 -- # pt= 00:21:00.263 10:26:13 -- scripts/common.sh@394 -- # return 1 00:21:00.263 10:26:13 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:21:00.263 10:26:13 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:21:00.263 10:26:13 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:21:00.263 10:26:13 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:21:00.263 10:26:13 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:21:00.263 10:26:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:21:00.263 No valid GPT data, bailing 00:21:00.263 10:26:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:21:00.522 10:26:13 -- scripts/common.sh@393 -- # pt= 00:21:00.522 10:26:13 -- scripts/common.sh@394 -- # return 1 00:21:00.522 10:26:13 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:21:00.522 10:26:13 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:21:00.522 10:26:13 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:21:00.522 10:26:13 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:21:00.522 10:26:13 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:21:00.522 10:26:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:21:00.522 No valid GPT data, bailing 00:21:00.522 10:26:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:21:00.522 10:26:13 -- scripts/common.sh@393 -- # pt= 00:21:00.522 10:26:13 -- scripts/common.sh@394 -- # return 1 00:21:00.522 10:26:13 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:21:00.522 10:26:13 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:21:00.522 10:26:13 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:21:00.522 10:26:13 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:21:00.522 10:26:13 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:00.522 10:26:13 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:21:00.522 10:26:13 -- nvmf/common.sh@654 -- # echo 1 00:21:00.522 10:26:13 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:21:00.522 10:26:13 -- nvmf/common.sh@656 -- # echo 1 00:21:00.522 10:26:13 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:21:00.522 10:26:13 -- nvmf/common.sh@663 -- # echo tcp 00:21:00.522 10:26:13 -- nvmf/common.sh@664 -- # echo 4420 00:21:00.522 10:26:13 -- nvmf/common.sh@665 -- # echo ipv4 00:21:00.522 10:26:13 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:00.522 10:26:13 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff23582e-2e24-4796-b69f-f798c3c56909 --hostid=ff23582e-2e24-4796-b69f-f798c3c56909 -a 10.0.0.1 -t tcp -s 4420 00:21:00.522 00:21:00.522 Discovery Log Number of Records 2, Generation counter 2 00:21:00.522 =====Discovery Log Entry 0====== 00:21:00.522 trtype: tcp 00:21:00.522 adrfam: ipv4 00:21:00.522 subtype: current discovery subsystem 00:21:00.522 treq: not specified, sq flow control disable supported 00:21:00.522 portid: 1 00:21:00.522 trsvcid: 4420 00:21:00.522 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:00.522 traddr: 10.0.0.1 00:21:00.522 eflags: none 00:21:00.522 sectype: none 00:21:00.522 =====Discovery Log Entry 1====== 00:21:00.522 trtype: tcp 00:21:00.522 adrfam: ipv4 00:21:00.522 subtype: nvme subsystem 00:21:00.522 treq: not specified, sq flow control disable supported 00:21:00.522 portid: 1 00:21:00.522 trsvcid: 4420 00:21:00.522 subnqn: kernel_target 00:21:00.522 traddr: 10.0.0.1 00:21:00.522 eflags: none 00:21:00.522 sectype: none 00:21:00.522 10:26:13 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:21:00.522 10:26:13 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:00.522 10:26:13 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:00.522 10:26:13 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:00.522 10:26:13 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:00.522 10:26:13 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:21:00.522 10:26:13 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:00.522 10:26:13 -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:00.522 10:26:13 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:00.522 10:26:13 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:00.522 10:26:13 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:00.522 10:26:13 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:00.522 10:26:13 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:00.522 10:26:13 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:00.522 10:26:13 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:00.522 10:26:13 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:00.522 10:26:13 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
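The kernel target above is assembled through the nvmet configfs tree; the xtrace shows the echoed values but not their redirect targets, so the attribute file names below are assumed from the standard Linux nvmet layout:

  nvmet=/sys/kernel/config/nvmet
  sub=$nvmet/subsystems/kernel_target
  port=$nvmet/ports/1
  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo SPDK-kernel_target > "$sub/attr_serial"               # identity string; exact attribute (serial vs model) not visible in the trace
  echo 1                  > "$sub/attr_allow_any_host"
  echo /dev/nvme1n3       > "$sub/namespaces/1/device_path"  # block device picked by the GPT-scan loop above
  echo 1                  > "$sub/namespaces/1/enable"
  echo 10.0.0.1           > "$port/addr_traddr"
  echo tcp                > "$port/addr_trtype"
  echo 4420               > "$port/addr_trsvcid"
  echo ipv4               > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"                            # expose the subsystem on the port

The nvme discover output above confirms the result: a discovery subsystem plus the kernel_target NVM subsystem, both listening on 10.0.0.1:4420.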
00:21:00.522 10:26:13 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:00.522 10:26:13 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:21:00.522 10:26:13 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:00.522 10:26:13 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:21:03.806 Initializing NVMe Controllers 00:21:03.806 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:21:03.806 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:21:03.806 Initialization complete. Launching workers. 00:21:03.806 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 26212, failed: 0 00:21:03.806 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 26212, failed to submit 0 00:21:03.806 success 0, unsuccess 26212, failed 0 00:21:03.806 10:26:17 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:03.806 10:26:17 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:21:07.094 Initializing NVMe Controllers 00:21:07.094 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:21:07.094 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:21:07.094 Initialization complete. Launching workers. 00:21:07.094 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 61946, failed: 0 00:21:07.094 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 25080, failed to submit 36866 00:21:07.094 success 0, unsuccess 25080, failed 0 00:21:07.094 10:26:20 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:07.094 10:26:20 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:21:10.387 Initializing NVMe Controllers 00:21:10.387 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:21:10.387 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:21:10.387 Initialization complete. Launching workers. 
00:21:10.387 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 74014, failed: 0 00:21:10.387 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 18454, failed to submit 55560 00:21:10.387 success 0, unsuccess 18454, failed 0 00:21:10.387 10:26:23 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:21:10.387 10:26:23 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:21:10.387 10:26:23 -- nvmf/common.sh@677 -- # echo 0 00:21:10.387 10:26:23 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:21:10.387 10:26:23 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:21:10.387 10:26:23 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:10.387 10:26:23 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:21:10.387 10:26:23 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:21:10.387 10:26:23 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:21:10.387 ************************************ 00:21:10.387 END TEST kernel_target_abort 00:21:10.387 ************************************ 00:21:10.387 00:21:10.387 real 0m10.504s 00:21:10.387 user 0m5.126s 00:21:10.387 sys 0m2.509s 00:21:10.387 10:26:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:10.387 10:26:23 -- common/autotest_common.sh@10 -- # set +x 00:21:10.387 10:26:23 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:21:10.387 10:26:23 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:21:10.387 10:26:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:10.387 10:26:23 -- nvmf/common.sh@116 -- # sync 00:21:10.387 10:26:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:10.387 10:26:23 -- nvmf/common.sh@119 -- # set +e 00:21:10.387 10:26:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:10.387 10:26:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:10.387 rmmod nvme_tcp 00:21:10.387 rmmod nvme_fabrics 00:21:10.387 rmmod nvme_keyring 00:21:10.387 10:26:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:10.387 10:26:23 -- nvmf/common.sh@123 -- # set -e 00:21:10.387 10:26:23 -- nvmf/common.sh@124 -- # return 0 00:21:10.387 10:26:23 -- nvmf/common.sh@477 -- # '[' -n 87411 ']' 00:21:10.387 10:26:23 -- nvmf/common.sh@478 -- # killprocess 87411 00:21:10.387 10:26:23 -- common/autotest_common.sh@926 -- # '[' -z 87411 ']' 00:21:10.387 Process with pid 87411 is not found 00:21:10.387 10:26:23 -- common/autotest_common.sh@930 -- # kill -0 87411 00:21:10.387 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (87411) - No such process 00:21:10.387 10:26:23 -- common/autotest_common.sh@953 -- # echo 'Process with pid 87411 is not found' 00:21:10.387 10:26:23 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:21:10.387 10:26:23 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:10.954 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:10.954 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:21:10.954 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:21:10.954 10:26:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:10.954 10:26:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:10.954 10:26:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:10.954 10:26:24 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:21:10.954 10:26:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.954 10:26:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:10.954 10:26:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.954 10:26:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:10.954 ************************************ 00:21:10.954 END TEST nvmf_abort_qd_sizes 00:21:10.954 ************************************ 00:21:10.954 00:21:10.954 real 0m24.349s 00:21:10.954 user 0m48.779s 00:21:10.954 sys 0m5.879s 00:21:10.954 10:26:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:10.954 10:26:24 -- common/autotest_common.sh@10 -- # set +x 00:21:11.212 10:26:24 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:11.212 10:26:24 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:11.212 10:26:24 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:11.212 10:26:24 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:11.212 10:26:24 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:11.212 10:26:24 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:11.212 10:26:24 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:11.212 10:26:24 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:11.212 10:26:24 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:21:11.212 10:26:24 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:11.212 10:26:24 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:21:11.212 10:26:24 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:11.212 10:26:24 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:11.212 10:26:24 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:21:11.212 10:26:24 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:21:11.212 10:26:24 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:21:11.212 10:26:24 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:21:11.212 10:26:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:11.212 10:26:24 -- common/autotest_common.sh@10 -- # set +x 00:21:11.212 10:26:24 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:21:11.212 10:26:24 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:21:11.212 10:26:24 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:21:11.212 10:26:24 -- common/autotest_common.sh@10 -- # set +x 00:21:13.120 INFO: APP EXITING 00:21:13.120 INFO: killing all VMs 00:21:13.120 INFO: killing vhost app 00:21:13.120 INFO: EXIT DONE 00:21:13.378 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:13.637 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:21:13.637 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:21:14.204 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:14.204 Cleaning 00:21:14.204 Removing: /var/run/dpdk/spdk0/config 00:21:14.204 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:14.204 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:14.204 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:14.204 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:14.204 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:14.204 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:14.204 Removing: /var/run/dpdk/spdk1/config 00:21:14.204 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:14.204 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:14.204 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:21:14.204 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:14.204 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:14.205 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:14.205 Removing: /var/run/dpdk/spdk2/config 00:21:14.205 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:14.205 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:14.205 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:14.205 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:14.205 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:14.205 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:14.205 Removing: /var/run/dpdk/spdk3/config 00:21:14.205 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:14.463 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:14.463 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:14.463 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:14.463 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:14.463 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:14.463 Removing: /var/run/dpdk/spdk4/config 00:21:14.463 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:14.463 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:14.463 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:14.463 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:14.463 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:14.463 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:14.463 Removing: /dev/shm/nvmf_trace.0 00:21:14.463 Removing: /dev/shm/spdk_tgt_trace.pid65437 00:21:14.463 Removing: /var/run/dpdk/spdk0 00:21:14.463 Removing: /var/run/dpdk/spdk1 00:21:14.463 Removing: /var/run/dpdk/spdk2 00:21:14.463 Removing: /var/run/dpdk/spdk3 00:21:14.463 Removing: /var/run/dpdk/spdk4 00:21:14.463 Removing: /var/run/dpdk/spdk_pid65292 00:21:14.463 Removing: /var/run/dpdk/spdk_pid65437 00:21:14.463 Removing: /var/run/dpdk/spdk_pid65673 00:21:14.463 Removing: /var/run/dpdk/spdk_pid65864 00:21:14.463 Removing: /var/run/dpdk/spdk_pid65998 00:21:14.463 Removing: /var/run/dpdk/spdk_pid66067 00:21:14.463 Removing: /var/run/dpdk/spdk_pid66142 00:21:14.463 Removing: /var/run/dpdk/spdk_pid66221 00:21:14.463 Removing: /var/run/dpdk/spdk_pid66297 00:21:14.463 Removing: /var/run/dpdk/spdk_pid66330 00:21:14.463 Removing: /var/run/dpdk/spdk_pid66371 00:21:14.463 Removing: /var/run/dpdk/spdk_pid66426 00:21:14.463 Removing: /var/run/dpdk/spdk_pid66515 00:21:14.463 Removing: /var/run/dpdk/spdk_pid66941 00:21:14.463 Removing: /var/run/dpdk/spdk_pid66993 00:21:14.463 Removing: /var/run/dpdk/spdk_pid67044 00:21:14.463 Removing: /var/run/dpdk/spdk_pid67060 00:21:14.463 Removing: /var/run/dpdk/spdk_pid67140 00:21:14.463 Removing: /var/run/dpdk/spdk_pid67156 00:21:14.463 Removing: /var/run/dpdk/spdk_pid67236 00:21:14.463 Removing: /var/run/dpdk/spdk_pid67252 00:21:14.463 Removing: /var/run/dpdk/spdk_pid67303 00:21:14.463 Removing: /var/run/dpdk/spdk_pid67321 00:21:14.464 Removing: /var/run/dpdk/spdk_pid67361 00:21:14.464 Removing: /var/run/dpdk/spdk_pid67379 00:21:14.464 Removing: /var/run/dpdk/spdk_pid67506 00:21:14.464 Removing: /var/run/dpdk/spdk_pid67536 00:21:14.464 Removing: /var/run/dpdk/spdk_pid67612 00:21:14.464 Removing: /var/run/dpdk/spdk_pid67663 00:21:14.464 Removing: /var/run/dpdk/spdk_pid67688 00:21:14.464 Removing: /var/run/dpdk/spdk_pid67746 00:21:14.464 Removing: /var/run/dpdk/spdk_pid67766 00:21:14.464 Removing: /var/run/dpdk/spdk_pid67800 00:21:14.464 Removing: /var/run/dpdk/spdk_pid67814 
00:21:14.464 Removing: /var/run/dpdk/spdk_pid67849 00:21:14.464 Removing: /var/run/dpdk/spdk_pid67868 00:21:14.464 Removing: /var/run/dpdk/spdk_pid67903 00:21:14.464 Removing: /var/run/dpdk/spdk_pid67922 00:21:14.464 Removing: /var/run/dpdk/spdk_pid67959 00:21:14.464 Removing: /var/run/dpdk/spdk_pid67973 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68013 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68027 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68062 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68083 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68112 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68137 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68166 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68191 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68220 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68239 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68274 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68288 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68328 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68342 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68377 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68396 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68431 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68450 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68485 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68499 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68533 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68553 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68586 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68610 00:21:14.464 Removing: /var/run/dpdk/spdk_pid68642 00:21:14.722 Removing: /var/run/dpdk/spdk_pid68670 00:21:14.722 Removing: /var/run/dpdk/spdk_pid68702 00:21:14.722 Removing: /var/run/dpdk/spdk_pid68727 00:21:14.722 Removing: /var/run/dpdk/spdk_pid68756 00:21:14.722 Removing: /var/run/dpdk/spdk_pid68781 00:21:14.722 Removing: /var/run/dpdk/spdk_pid68811 00:21:14.722 Removing: /var/run/dpdk/spdk_pid68880 00:21:14.722 Removing: /var/run/dpdk/spdk_pid68972 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69275 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69292 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69323 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69341 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69355 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69373 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69391 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69404 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69428 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69440 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69454 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69477 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69490 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69509 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69527 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69545 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69553 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69582 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69589 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69608 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69643 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69650 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69683 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69745 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69772 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69781 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69815 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69819 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69832 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69873 00:21:14.722 Removing: 
/var/run/dpdk/spdk_pid69884 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69916 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69924 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69931 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69943 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69952 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69959 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69967 00:21:14.722 Removing: /var/run/dpdk/spdk_pid69979 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70006 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70033 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70042 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70071 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70086 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70092 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70134 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70145 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70172 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70185 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70188 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70200 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70213 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70215 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70228 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70230 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70303 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70356 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70460 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70496 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70541 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70555 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70575 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70590 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70625 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70645 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70713 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70729 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70772 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70848 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70904 00:21:14.722 Removing: /var/run/dpdk/spdk_pid70937 00:21:14.722 Removing: /var/run/dpdk/spdk_pid71032 00:21:14.722 Removing: /var/run/dpdk/spdk_pid71074 00:21:14.722 Removing: /var/run/dpdk/spdk_pid71105 00:21:14.722 Removing: /var/run/dpdk/spdk_pid71326 00:21:14.722 Removing: /var/run/dpdk/spdk_pid71417 00:21:14.722 Removing: /var/run/dpdk/spdk_pid71446 00:21:14.722 Removing: /var/run/dpdk/spdk_pid71759 00:21:14.981 Removing: /var/run/dpdk/spdk_pid71797 00:21:14.981 Removing: /var/run/dpdk/spdk_pid72102 00:21:14.981 Removing: /var/run/dpdk/spdk_pid72515 00:21:14.981 Removing: /var/run/dpdk/spdk_pid72777 00:21:14.981 Removing: /var/run/dpdk/spdk_pid73541 00:21:14.981 Removing: /var/run/dpdk/spdk_pid74368 00:21:14.981 Removing: /var/run/dpdk/spdk_pid74480 00:21:14.981 Removing: /var/run/dpdk/spdk_pid74552 00:21:14.981 Removing: /var/run/dpdk/spdk_pid75796 00:21:14.981 Removing: /var/run/dpdk/spdk_pid76008 00:21:14.981 Removing: /var/run/dpdk/spdk_pid76317 00:21:14.981 Removing: /var/run/dpdk/spdk_pid76428 00:21:14.981 Removing: /var/run/dpdk/spdk_pid76568 00:21:14.981 Removing: /var/run/dpdk/spdk_pid76590 00:21:14.981 Removing: /var/run/dpdk/spdk_pid76622 00:21:14.981 Removing: /var/run/dpdk/spdk_pid76645 00:21:14.981 Removing: /var/run/dpdk/spdk_pid76742 00:21:14.981 Removing: /var/run/dpdk/spdk_pid76877 00:21:14.981 Removing: /var/run/dpdk/spdk_pid77021 00:21:14.981 Removing: /var/run/dpdk/spdk_pid77102 00:21:14.981 Removing: /var/run/dpdk/spdk_pid77493 00:21:14.981 Removing: /var/run/dpdk/spdk_pid77833 
00:21:14.981 Removing: /var/run/dpdk/spdk_pid77839 00:21:14.981 Removing: /var/run/dpdk/spdk_pid80024 00:21:14.981 Removing: /var/run/dpdk/spdk_pid80026 00:21:14.981 Removing: /var/run/dpdk/spdk_pid80304 00:21:14.981 Removing: /var/run/dpdk/spdk_pid80319 00:21:14.981 Removing: /var/run/dpdk/spdk_pid80339 00:21:14.981 Removing: /var/run/dpdk/spdk_pid80364 00:21:14.981 Removing: /var/run/dpdk/spdk_pid80369 00:21:14.981 Removing: /var/run/dpdk/spdk_pid80458 00:21:14.981 Removing: /var/run/dpdk/spdk_pid80464 00:21:14.981 Removing: /var/run/dpdk/spdk_pid80572 00:21:14.981 Removing: /var/run/dpdk/spdk_pid80581 00:21:14.981 Removing: /var/run/dpdk/spdk_pid80689 00:21:14.981 Removing: /var/run/dpdk/spdk_pid80691 00:21:14.981 Removing: /var/run/dpdk/spdk_pid81091 00:21:14.981 Removing: /var/run/dpdk/spdk_pid81134 00:21:14.981 Removing: /var/run/dpdk/spdk_pid81244 00:21:14.981 Removing: /var/run/dpdk/spdk_pid81321 00:21:14.981 Removing: /var/run/dpdk/spdk_pid81635 00:21:14.981 Removing: /var/run/dpdk/spdk_pid81829 00:21:14.981 Removing: /var/run/dpdk/spdk_pid82213 00:21:14.981 Removing: /var/run/dpdk/spdk_pid82747 00:21:14.981 Removing: /var/run/dpdk/spdk_pid83195 00:21:14.981 Removing: /var/run/dpdk/spdk_pid83255 00:21:14.981 Removing: /var/run/dpdk/spdk_pid83310 00:21:14.981 Removing: /var/run/dpdk/spdk_pid83370 00:21:14.981 Removing: /var/run/dpdk/spdk_pid83491 00:21:14.981 Removing: /var/run/dpdk/spdk_pid83546 00:21:14.981 Removing: /var/run/dpdk/spdk_pid83608 00:21:14.981 Removing: /var/run/dpdk/spdk_pid83667 00:21:14.981 Removing: /var/run/dpdk/spdk_pid83984 00:21:14.981 Removing: /var/run/dpdk/spdk_pid85159 00:21:14.981 Removing: /var/run/dpdk/spdk_pid85299 00:21:14.981 Removing: /var/run/dpdk/spdk_pid85547 00:21:14.981 Removing: /var/run/dpdk/spdk_pid86106 00:21:14.981 Removing: /var/run/dpdk/spdk_pid86265 00:21:14.981 Removing: /var/run/dpdk/spdk_pid86426 00:21:14.981 Removing: /var/run/dpdk/spdk_pid86524 00:21:14.981 Removing: /var/run/dpdk/spdk_pid86692 00:21:14.981 Removing: /var/run/dpdk/spdk_pid86805 00:21:14.981 Removing: /var/run/dpdk/spdk_pid87462 00:21:14.981 Removing: /var/run/dpdk/spdk_pid87493 00:21:14.981 Removing: /var/run/dpdk/spdk_pid87528 00:21:14.981 Removing: /var/run/dpdk/spdk_pid87776 00:21:14.981 Removing: /var/run/dpdk/spdk_pid87811 00:21:14.981 Removing: /var/run/dpdk/spdk_pid87841 00:21:14.981 Clean 00:21:15.239 killing process with pid 59625 00:21:15.239 killing process with pid 59626 00:21:15.239 10:26:28 -- common/autotest_common.sh@1436 -- # return 0 00:21:15.239 10:26:28 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:21:15.239 10:26:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:15.239 10:26:28 -- common/autotest_common.sh@10 -- # set +x 00:21:15.239 10:26:28 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:21:15.239 10:26:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:15.239 10:26:28 -- common/autotest_common.sh@10 -- # set +x 00:21:15.239 10:26:28 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:15.239 10:26:28 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:15.239 10:26:28 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:15.239 10:26:28 -- spdk/autotest.sh@394 -- # hash lcov 00:21:15.239 10:26:28 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:21:15.239 10:26:28 -- spdk/autotest.sh@396 -- # hostname 00:21:15.240 10:26:28 -- spdk/autotest.sh@396 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:15.498 geninfo: WARNING: invalid characters removed from testname! 00:21:37.423 10:26:50 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:40.024 10:26:53 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:42.558 10:26:55 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:45.111 10:26:58 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:47.644 10:27:00 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:49.547 10:27:02 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:52.125 10:27:05 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:52.125 10:27:05 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:52.125 10:27:05 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:21:52.125 10:27:05 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.125 10:27:05 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.125 10:27:05 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.125 10:27:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.125 10:27:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.125 10:27:05 -- paths/export.sh@5 -- $ export PATH 00:21:52.125 10:27:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.125 10:27:05 -- common/autobuild_common.sh@437 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:21:52.125 10:27:05 -- common/autobuild_common.sh@438 -- $ date +%s 00:21:52.125 10:27:05 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721989625.XXXXXX 00:21:52.125 10:27:05 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721989625.5YrMTF 00:21:52.125 10:27:05 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:21:52.125 10:27:05 -- common/autobuild_common.sh@444 -- $ '[' -n v22.11.4 ']' 00:21:52.125 10:27:05 -- common/autobuild_common.sh@445 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:21:52.125 10:27:05 -- common/autobuild_common.sh@445 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:21:52.125 10:27:05 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:21:52.125 10:27:05 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:21:52.125 10:27:05 -- common/autobuild_common.sh@454 -- $ get_config_params 00:21:52.125 10:27:05 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:21:52.125 10:27:05 -- common/autotest_common.sh@10 -- $ set +x 00:21:52.125 10:27:05 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:21:52.125 10:27:05 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:21:52.125 10:27:05 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:21:52.125 10:27:05 -- 
spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:21:52.125 10:27:05 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:21:52.125 10:27:05 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:21:52.125 10:27:05 -- spdk/autopackage.sh@19 -- $ timing_finish 00:21:52.125 10:27:05 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:21:52.125 10:27:05 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:21:52.125 10:27:05 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:52.125 10:27:05 -- spdk/autopackage.sh@20 -- $ exit 0 00:21:52.125 + [[ -n 5822 ]] 00:21:52.125 + sudo kill 5822 00:21:52.134 [Pipeline] } 00:21:52.152 [Pipeline] // timeout 00:21:52.158 [Pipeline] } 00:21:52.176 [Pipeline] // stage 00:21:52.181 [Pipeline] } 00:21:52.197 [Pipeline] // catchError 00:21:52.207 [Pipeline] stage 00:21:52.209 [Pipeline] { (Stop VM) 00:21:52.222 [Pipeline] sh 00:21:52.501 + vagrant halt 00:21:55.784 ==> default: Halting domain... 00:22:01.071 [Pipeline] sh 00:22:01.351 + vagrant destroy -f 00:22:04.637 ==> default: Removing domain... 00:22:05.216 [Pipeline] sh 00:22:05.498 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/output 00:22:05.507 [Pipeline] } 00:22:05.533 [Pipeline] // stage 00:22:05.541 [Pipeline] } 00:22:05.560 [Pipeline] // dir 00:22:05.566 [Pipeline] } 00:22:05.583 [Pipeline] // wrap 00:22:05.590 [Pipeline] } 00:22:05.605 [Pipeline] // catchError 00:22:05.615 [Pipeline] stage 00:22:05.618 [Pipeline] { (Epilogue) 00:22:05.632 [Pipeline] sh 00:22:05.921 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:12.495 [Pipeline] catchError 00:22:12.497 [Pipeline] { 00:22:12.510 [Pipeline] sh 00:22:12.796 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:12.796 Artifacts sizes are good 00:22:12.804 [Pipeline] } 00:22:12.820 [Pipeline] // catchError 00:22:12.829 [Pipeline] archiveArtifacts 00:22:12.836 Archiving artifacts 00:22:13.013 [Pipeline] cleanWs 00:22:13.040 [WS-CLEANUP] Deleting project workspace... 00:22:13.040 [WS-CLEANUP] Deferred wipeout is used... 00:22:13.051 [WS-CLEANUP] done 00:22:13.053 [Pipeline] } 00:22:13.070 [Pipeline] // stage 00:22:13.075 [Pipeline] } 00:22:13.091 [Pipeline] // node 00:22:13.096 [Pipeline] End of Pipeline 00:22:13.130 Finished: SUCCESS
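
A side note on the abort_qd_sizes runs earlier in this log: the trace shows the same abort example being driven at queue depths 4, 24 and 64 against the kernel nvmet listener at 10.0.0.1:4420. A minimal sketch of the equivalent loop, assuming the repo layout and target string seen in the log (this is an illustration, not a verbatim copy of target/abort_qd_sizes.sh):

# queue depths exercised in this log; the -r string matches the kernel_target subsystem set up earlier in the run
qds=(4 24 64)
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'
for qd in "${qds[@]}"; do
  /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done

In the runs above, the number of aborts that "failed to submit" grows with queue depth (0 at qd=4, 36866 at qd=24, 55560 at qd=64), presumably because more I/O is outstanding than there are abort slots available at any given time.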
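The clean_kernel_target trace in the test epilogue tears the kernel NVMe-oF target back down through configfs. A minimal standalone sketch of that sequence, assuming the same subsystem and port names used in this log; the redirect target of the traced 'echo 0' is not visible in the xtrace, so the namespace 'enable' attribute below is an assumption:

# tear down children before parents: disable/unlink the namespace and port, then remove the configfs nodes
subsys=/sys/kernel/config/nvmet/subsystems/kernel_target
port=/sys/kernel/config/nvmet/ports/1
echo 0 > "$subsys/namespaces/1/enable"    # assumed target of the traced 'echo 0'
rm -f "$port/subsystems/kernel_target"    # drop the port -> subsystem link
rmdir "$subsys/namespaces/1"
rmdir "$port"
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet               # unload the kernel target modules last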
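The coverage epilogue near the end of the log captures counters from the test run, merges them with the baseline capture and strips out-of-tree and example code before packaging. Stripped of the --rc/genhtml options and the -t test name used in the log, the chain reduces to roughly this sketch (paths as in the log; cov_base.info is assumed to have been captured earlier in the run):

out=/home/vagrant/spdk_repo/spdk/../output
# capture counters produced by the test run, then merge with the baseline capture
lcov -q -c --no-external -d /home/vagrant/spdk_repo/spdk -o "$out/cov_test.info"
lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
# drop DPDK, system headers and example/app code from the combined report
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
  lcov -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
done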