[00:54:38] *** Quits: ziyeyang_ (~ziyeyang@134.134.139.72) (Quit: Leaving)
[01:11:30] *** Joins: Vikas_Aggarwal (73719c02@gateway/web/freenode/ip.115.113.156.2)
[01:37:39] *** Quits: Shuhei (caf6fc61@gateway/web/freenode/ip.202.246.252.97) (Ping timeout: 260 seconds)
[01:49:12] *** Joins: tomzawadzki (~tomzawadz@134.134.139.74)
[02:12:44] Shuhei: it's disabled now, thanks
[02:14:10] Shuhei, my bad - I was testing a new CI system and I did not enable "Silent mode". Sorry about the fail spam. Silent mode is enabled now so there should be no more messages
[02:25:28] *** Quits: drv (daniel@oak.drv.nu) (Quit: No Ping reply in 180 seconds.)
[02:26:48] *** Joins: drv (daniel@oak.drv.nu)
[02:26:48] *** ChanServ sets mode: +o drv
[02:42:58] *** Joins: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl)
[03:20:45] *** Quits: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl) (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[03:23:33] *** Quits: Vikas_Aggarwal (73719c02@gateway/web/freenode/ip.115.113.156.2) (Quit: Page closed)
[03:25:26] *** Joins: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl)
[06:12:19] *** Joins: darsto_ (~dstojacx@192.55.54.41)
[06:13:34] *** Quits: darsto (~dstojacx@192.55.54.41) (Ping timeout: 264 seconds)
[06:13:34] *** darsto_ is now known as darsto
[06:53:33] *** Quits: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl) (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[07:02:57] *** Joins: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl)
[07:14:31] *** Quits: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl) (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[07:44:34] *** Joins: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl)
[09:09:55] *** Quits: tomzawadzki (~tomzawadz@134.134.139.74) (Ping timeout: 260 seconds)
[09:15:37] *** Quits: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl) (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[11:34:29] *** Quits: vermavis (~vermavis@192.55.54.41) (*.net *.split)
[11:39:17] *** Joins: vermavis (~vermavis@192.55.54.41)
[12:27:20] *** Parts: lhodev (~Adium@66-90-218-190.dyn.grandenetworks.net) ()
[14:04:16] *** Joins: Abbott_ (0cda5282@gateway/web/freenode/ip.12.218.82.130)
[14:05:35] Hello there! Can I know if the [-q io depth] option in spdk perf is for each drive or for all the drives? Thanks.
[14:21:21] Abbott_, not sure off the top of my head, will go look at the code real quick unless someone jumps in before I can tell :)
[14:21:51] Abbott_: it is per drive
[14:23:25] examples/nvme/perf/perf.c lines 772-775 - this loop iterates through each namespace and calls submit_io(ns_ctx, g_queue_depth)
[14:24:00] and I was just about to say that... :)
[14:33:13] Appreciate your help Jim.
[14:33:32] and peluse ...
[14:37:28] np... let us know if you have any other questions or run into something strange in trying to run some apps
[15:04:17] *** Joins: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl)
[16:31:50] Speaking of something strange.... I used spdk perf to test 1 to 12 NVMe drives and the latency decreases as the number of drives increases. Normally it should be the other way round. Any idea?
[16:32:09] Below is the data and hope it formats correctly:
[16:32:14] Latency(us) IOPS MB/s Average min max 1 Drive - Total : 1721.38 1721.38 298146.69 16724.58 303723.88 2 Drive - Total : 3433.87 3433.87 149255.88 18002.21 154307.72 2 Drive - Total : 3433.87 3433.87 149255.88 18002.21 154307.72 3 Drive - Total : 5145.77 5145.77 100316.60 15778.73 105489.48 4 Drive
[16:33:42] that didn't come out very well
[16:35:08] what size IO?
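To illustrate the per-drive behavior described at [14:23:25], here is a minimal, self-contained C sketch of how a submission loop like the one in examples/nvme/perf/perf.c applies the -q value to each namespace independently. The names below (ns_ctx, submit_io, g_queue_depth, NUM_DRIVES) mirror identifiers mentioned in the chat but are simplified stand-ins for illustration, not the actual SPDK source.

#include <stdio.h>

/* Simplified model: the -q value (g_queue_depth) is applied to each
 * namespace independently, so total outstanding I/O grows with the
 * number of drives. */
#define NUM_DRIVES 4
static int g_queue_depth = 32;   /* value passed with -q */

struct ns_ctx {
    int id;
    int outstanding_io;
};

static void submit_io(struct ns_ctx *ctx, int depth)
{
    /* In the real perf tool this would issue 'depth' I/Os on the
     * namespace's queue pair; here we only record the per-drive depth. */
    ctx->outstanding_io = depth;
    printf("drive %d: queued %d I/Os\n", ctx->id, depth);
}

int main(void)
{
    struct ns_ctx drives[NUM_DRIVES];

    /* One submit_io() call per namespace, each with the full -q depth. */
    for (int i = 0; i < NUM_DRIVES; i++) {
        drives[i].id = i;
        submit_io(&drives[i], g_queue_depth);
    }
    return 0;
}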
[16:36:32] The data (https://shrib.com/#wNRbXv_1_boxYUs9s9AO) is for 1MB. And the same is for 4K also.
[16:37:28] Sorry about the font. Did not align the columns well.
[16:39:19] Cleaned it up now as best as I could... Thanks.
[16:39:42] do you also have the 4K data that you can post?
[16:40:58] Give me a minute, will put it in that same link above.
[16:41:18] the 1MB data definitely looks off - I'm looking at the perf.c code now
[16:43:32] Thanks. Have uploaded both the 4K and 1MB random read data.
[16:43:42] I think this might be an overflow issue in the stats counting
[16:44:02] we recently fixed the bdevperf tool to change some floats to doubles, but nvme perf is still using floats
[16:44:04] That makes sense...
[16:44:29] are you using master or one of the release tags?
[16:44:50] and how long are your test runs?
[16:46:14] Think it is the master taken 2 weeks back. Let me check.
[16:46:19] And each run is 60sec
[16:46:34] queue depth?
[16:49:53] Q=32 for both 4K and 1M.
[16:50:33] well I'm going to change these floats to doubles but I no longer think that's the problem
[16:50:46] Sorry I could not find the spdk information. Any way to find it from the spdk directory?
[16:51:02] the version I mean?
[16:51:08] if you just do 'git log' and tell me the first 8 characters in the commit ID
[16:52:01] Sorry, I used the web based git download and thus do not have that information..
[16:52:17] ok - no worries - I see the problem
[16:52:41] the avg/min/max latency calculations are completely broken
[16:52:51] :-)
[16:53:57] it's 4:53pm here so I won't get to fixing this today - but thank you for bringing this to our attention - we'll get it fixed shortly
[16:54:57] No problem. Take your time. Once fixed, would appreciate it if you can update this chat. Will monitor and pick it up. Thanks for your help.
[16:55:22] will do
[17:01:26] To enable NVMeoF for SPDK, I found two different steps to follow in two different webpages of the SPDK website - "make CONFIG_RDMA=y" in one and "./configure --with-rdma; make" in the other. Are both correct?
[17:02:44] Target side...
[18:09:04] What is the safe way to exit out of "app/nvmf_tgt/nvmf_tgt"? Ctrl-C? Reboot system?
[18:11:55] Looks like Ctrl-C is OK.
[18:41:00] In the nvmf_target configuration file, if ReactorMask is set and the Core option under the Subsystem section is disabled, will the Subsystem use what is set in ReactorMask?
[19:07:39] *** Quits: Abbott_ (0cda5282@gateway/web/freenode/ip.12.218.82.130) (Quit: Page closed)
[21:46:21] *** Joins: sethhowe_ (~sethhowe@192.55.54.40)
[21:49:03] *** Quits: sethhowe (~sethhowe@134.134.139.83) (Ping timeout: 248 seconds)
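As a rough illustration of the float-precision issue discussed around [16:43:42] through [16:52:41], the sketch below accumulates a long run of per-I/O latency samples in both a float and a double. Once the float total gets large, each added sample is rounded to the nearest representable value (and eventually rounds away entirely), so the derived average drifts, while the double total stays exact. The sample count and tick value are made up for demonstration and are not taken from perf.c.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Hypothetical workload: 100 million I/Os, each with a latency
     * sample of 100000 ticks. Exact total = 1e13 ticks. */
    const uint64_t ios = 100000000ULL;
    const uint64_t ticks_per_io = 100000ULL;

    float  total_f = 0.0f;   /* 24-bit mantissa: loses the samples */
    double total_d = 0.0;    /* 53-bit mantissa: stays exact here */

    for (uint64_t i = 0; i < ios; i++) {
        total_f += (float)ticks_per_io;
        total_d += (double)ticks_per_io;
    }

    /* The float-based average comes out far from the true 100000 ticks;
     * the double-based average is correct. */
    printf("float  average: %f ticks\n", total_f / (float)ios);
    printf("double average: %f ticks\n", total_d / (double)ios);
    return 0;
}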