[02:31:31] *** Joins: gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl)
[04:40:14] *** Quits: gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl) (Quit: Textual IRC Client: www.textualapp.com)
[07:29:27] *** Quits: tomzawadzki (~tzawadzk@192.55.54.36) (Ping timeout: 260 seconds)
[08:21:01] *** Quits: peluse_ (~peluse@2600:8800:140a:ae18:d1a4:a887:24cb:eff3) (Remote host closed the connection)
[08:32:17] bwalker, drv: can you look at this patch for the 17.07 release? https://review.gerrithub.io/#/c/371021/
[09:54:51] *** Joins: gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl)
[09:58:27] *** Joins: jankryl (~textual@ip-94-113-131-57.net.upcbroadband.cz)
[10:32:50] jimharris: any idea how the cache buffer size can impact the randread test?
[10:32:56] I don't see how that is possible
[10:33:16] yeah - I'm not understanding that either
[10:33:28] that's why i asked to look at his patch
[10:34:15] i'll need to look at the cache buffer setup code some more
[10:34:59] like somehow decreasing the cache buffer size makes us allocate more memory for blobfs cache, which means less memory for the rocksdb block cache?
[10:36:06] but the rocksdb block cache isn't greedy - it's fixed size
[10:36:08] specified by the user
[10:36:12] exactly
[10:36:58] there should be minimal if any difference allocating 4X 64K buffers for the mempool instead of X 256K buffers
[10:39:58] CACHE_READAHEAD_THRESHOLD is not associated at all with CACHE_BUFFER_SIZE, so we shouldn't be doing any extra readahead I/O by shrinking the cache buffer size
[10:41:15] yeah I can't explain it
[10:42:01] *** Quits: nKumar (uid239884@gateway/web/irccloud.com/x-gzlyhiprjieubqge) (Quit: Connection closed for inactivity)
[10:43:32] jimharris: re: vhost number of devices patch above, I agree that we should increase it, but is it necessary for the release, or can it wait?
[10:47:02] *** Quits: gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl) (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[10:57:26] peluse: got around to reviewing the blobstore hello_world patch - you did exactly what I was looking for
[10:57:42] I made comments on the details, but you basically nailed it
[11:00:12] *** Joins: gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl)
[11:05:34] *** Quits: jankryl (~textual@ip-94-113-131-57.net.upcbroadband.cz) (Quit: Textual IRC Client: www.textualapp.com)
[11:52:35] drv: I guess it can wait - it's the only patch that vishal needed for all of the vhost work he's been doing in prep for the FMS demo
[11:55:41] bwalker, cool, thanks. I'll take a look and be in tomorrow afternoon (not this afternoon) and we can chat some more about the API. Thanks man!
[13:44:33] *** Quits: gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl) (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[13:52:10] anything else release related that needs to get reviewed?
[13:52:26] if not, I want to go ahead and mark v17.07-rc2
[14:09:04] *** Joins: peluse_ (~peluse@2600:8800:140a:ae18:41ef:3342:fc78:9c7f)
[14:12:19] bwalker, want me to handle the NVMe fix? I actually have time :)
[14:13:13] sure if you want to
[14:13:27] there is a real quick fix, and then the longer, involved "right" fix
[14:13:50] OK, I'm on it then. What are your ideas?
[14:14:09] in examples/nvme/fio_plugin/fio_plugin.c
[14:14:16] in the function probe_cb
[14:14:22] just delete the code that sets the queue depth on the options
[14:14:31] and make that function just return true
[14:14:39] that will make it so we allocate the default size queue pairs always - which are 128
[14:14:57] OK, I'll give that a quick try. Thanks!
[14:14:59] I just looked, it's actually 256 by default, so even less likely to be a problem
[14:15:22] but yeah, the real fix would involve creating a new I/O queue pair at the start of each job, with the correct queue depth
[14:15:40] rather than once up front like we do now
[14:15:46] but to get the code to a point where it is ready to create a new I/O queue for each job, you have to restructure a whole bunch of stuff
[14:15:53] because we aren't doing the right things in response to the right fio callbacks
[14:16:21] got it
[14:16:33] I just confirmed, just deleting that code from probe_cb fixes it
[14:17:10] so the tradeoff here is just always using a pretty large NVMe qpair depth and wasting a pretty small amount of resources, versus getting the nvme q depth to match the job requirements every time, right?
[14:17:19] yeah
[14:17:24] I'll send a patch to the guy who is seeing this and confirm it works for him, then submit a patch
[14:17:35] way better to waste that tiny amount of extra memory per queue than what we have now
[14:18:49] yup
[14:27:49] I'll put that on the Trello TODO list...
[14:37:34] FYI confirmed the temp fix on the system I used here to repro... also added the trello item
[14:42:58] quick fix pushed also
[15:35:34] bwalker, so wrt that one place in hello_blob where I used an event that wasn't required, we have at least one other sample app that did that, so I wasn't sure if we were trying to promote small chunks of work all event driven as a pattern, or if that person just wasn't paying attention :)
[15:36:18] I'll make the call directly and throw a comment in there and you guys can see if you like it or not. Thanks for all the other comments too - you answered most of my questions with your questions
[16:05:57] bwalker, yeah this hello will be 10x better with your suggestions. Heading to see UB40 here in a few so will probably not finish til tomorrow...
[16:06:19] red red wine
[16:42:22] oh man, they have a ton of hits and the original singer is with them... I'm outta here!
[19:19:32] *** Quits: jstern (~jstern@192.55.54.44) (Ping timeout: 260 seconds)
[19:19:32] *** Quits: qdai2 (~qdai2@192.55.54.44) (Ping timeout: 260 seconds)
[19:19:43] *** Joins: jstern_ (~jstern@192.55.54.44)
[19:20:07] *** jstern_ is now known as jstern
[19:23:07] *** Joins: qdai2 (~qdai2@192.55.54.44)
[20:50:43] *** Joins: ziyeyang_ (~ziyeyang@192.55.54.42)
[23:09:50] *** Joins: tomzawadzki (~tzawadzk@192.55.54.45)
[23:59:30] *** Joins: gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl)