[01:11:31] *** Joins: pwodkowx__ (pwodkowx@nat/intel/x-tgoufkzchntlrmwb)
[01:18:23] *** Quits: pwodkowx__ (pwodkowx@nat/intel/x-tgoufkzchntlrmwb) (Quit: Leaving)
[02:03:28] *** Joins: pwodkowx__ (pwodkowx@nat/intel/x-dhvionnushlrwadu)
[02:03:42] *** Quits: pwodkowx__ (pwodkowx@nat/intel/x-dhvionnushlrwadu) (Client Quit)
[02:14:43] *** Quits: pwodkowx (~pwodkowx@192.55.54.36) (Quit: ZNC - http://znc.in)
[02:28:12] *** Joins: pwodkowx (~pwodkowx@192.55.54.36)
[05:07:01] *** Quits: pwodkowx_ (pwodkowx@nat/intel/x-gehkussnkdrrkvwr) (*.net *.split)
[05:07:02] *** Quits: sethhowe (~sethhowe@192.55.55.41) (*.net *.split)
[05:07:02] *** Quits: pniedzwx (pniedzwx@nat/intel/x-fzffxfkqtmomoifd) (*.net *.split)
[05:07:02] *** Quits: klateck (~klateck@192.55.54.36) (*.net *.split)
[05:22:08] *** Joins: pwodkowx_ (pwodkowx@nat/intel/x-gehkussnkdrrkvwr)
[05:22:08] *** Joins: sethhowe (~sethhowe@192.55.55.41)
[05:22:08] *** Joins: pniedzwx (pniedzwx@nat/intel/x-fzffxfkqtmomoifd)
[05:22:08] *** Joins: klateck (~klateck@192.55.54.36)
[05:59:53] *** Joins: pwodkowx__ (pwodkowx@nat/intel/x-bpfxwnsjdfqvlsvz)
[05:59:53] *** Quits: pwodkowx_ (pwodkowx@nat/intel/x-gehkussnkdrrkvwr) (Remote host closed the connection)
[06:32:22] *** Joins: pwodkowx_ (pwodkowx@nat/intel/x-webcpxgrtxmpxxuq)
[06:34:57] *** Quits: pwodkowx__ (pwodkowx@nat/intel/x-bpfxwnsjdfqvlsvz) (Ping timeout: 248 seconds)
[07:11:42] *** Quits: pwodkowx_ (pwodkowx@nat/intel/x-webcpxgrtxmpxxuq) (Ping timeout: 252 seconds)
[07:13:15] Hi guys, I would like you to have a quick look at these two patches: https://review.gerrithub.io/#/c/375764/ and https://review.gerrithub.io/#/c/376382/
[07:13:41] it's still work in progress, but I would like to know if this is a good approach
[09:50:14] drv: re: cntlid wraparound. Do you think it is a good assumption that there will be a relatively small number of controllers per subsystem at any given time?
[09:50:26] i.e. only a couple of clients will actually be connected to a subsystem at once
[09:50:51] if we can avoid making assumptions like that, it would be better for scaling - but we can do the simple thing for now and improve it later
[09:50:59] if yes, then I propose the algorithm is a linear search for a free cntlid starting at 0 each time - if the list is only 1 or 2 elements that's the fastest way
[09:51:18] otherwise, I like your idea to use a bit mask
[09:51:22] it's an extra 8k per subsystem
[09:51:27] it would also be nice to avoid reusing a CNTLID until absolutely necessary, so we can avoid things like a host reconnecting to the wrong session
[09:51:28] that probably doesn't matter
[09:52:01] I was thinking about that - if we reuse a cntlid that was used recently it could be confusing
[09:52:12] but how do I track "recently" without a much heavier data structure
[09:52:36] you can still use the bit array - just keep a "last allocated CNTLID" like you were doing already, but use that as a starting point in the free cntlids bit array
[09:52:42] and if you don't find one, start over from 0
[09:52:54] but there are scenarios when even that messes up
[09:53:04] for instance, say I have cntlid 0 and it is very long lived
[09:53:06] yeah, I guess that falls apart later on
[09:53:25] I guess avoiding cntlid reuse is not that critical
[09:53:31] then I have another ctrlr connecting/disconnecting over and over
[09:53:35] as long as the host is behaving correctly, it shouldn't matter
[09:53:38] and right before it gets back to 0, cntlid 0 disconnects
[09:54:09] so if we don't care about reuse of cntlid, because hosts should behave correctly
[09:54:19] then why not search from 0 every time?
[09:54:29] that's what I'm saying, yeah
[09:54:42] ok - that's what I'm going to implement
[09:54:54] I won't use the bitmask just yet - I'm not sure it's necessary
[09:54:59] we can always improve it later - just make sure to wrap the cntlid allocation in a function that we can swap out for something better later
[09:55:26] if we have a scenario where we have thousands of controllers in a single subsystem then we can use the bit mask
[09:59:42] I have to make it work with your patch that restricts the range of cntlids
[10:08:24] you can do yours first if you prefer - I can rebase mine on the new code
[10:08:31] or just squash it in with your new changes
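The allocation scheme agreed on above is a linear search from the lowest free cntlid on every connect, kept behind a single helper so a bitmask-based allocator can be swapped in later. A minimal sketch of that helper is below; the struct layout, the helper names, and the valid cntlid range used here are illustrative assumptions, not the actual SPDK definitions.

#include <stdint.h>
#include <stdbool.h>
#include <sys/queue.h>

/* Illustrative stand-ins for the real SPDK structures. */
struct nvmf_ctrlr {
	uint16_t cntlid;
	TAILQ_ENTRY(nvmf_ctrlr) link;
};

struct nvmf_subsystem {
	TAILQ_HEAD(, nvmf_ctrlr) ctrlrs;
};

static bool
cntlid_in_use(struct nvmf_subsystem *subsystem, uint16_t cntlid)
{
	struct nvmf_ctrlr *ctrlr;

	TAILQ_FOREACH(ctrlr, &subsystem->ctrlrs, link) {
		if (ctrlr->cntlid == cntlid) {
			return true;
		}
	}
	return false;
}

/*
 * Linear search from the lowest value every time.  This is O(n^2) in the
 * number of attached controllers, which is fine while only a handful of
 * hosts connect; keeping it behind one helper means a bitmask allocator can
 * replace it later without touching the callers.  Returns 0 when no cntlid
 * is available.
 */
static uint16_t
cntlid_alloc(struct nvmf_subsystem *subsystem)
{
	uint16_t cntlid;

	/* Assumed valid range; the actual patch restricts the range differently. */
	for (cntlid = 1; cntlid < 0xFFF0; cntlid++) {
		if (!cntlid_in_use(subsystem, cntlid)) {
			return cntlid;
		}
	}
	return 0;
}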
[11:44:18] drv: where is discovery_log_page allocated with a variable size?
[11:44:33] it looks to me like it's always allocated as the same size
[11:45:22] I know that size of that page can be variable but don't we always just allocate a single page and return the data 4KB at a time?
[12:40:54] jimharris: in nvmf_update_discovery_log(), we generate the log page entries and realloc() each loop iteration
[12:41:25] it could probably be preallocated to some max size instead, but it's only modified when a new subsystem is added, so it's probably not a big deal
[12:46:59] to be clear, discovery_log_page contains the entire pre-generated discovery log, not just the current 4K piece
[12:47:15] and the get log page code just copies parts of it out as requested by get log page offset/length
[12:47:24] ah - I missed the realloc() - never mind then and carry on
[13:17:50] this is an initial cut at removing all of the duplication between split, gpt and error bdev modules
[13:17:52] https://review.gerrithub.io/#/c/376423/
[13:18:21] still needs a lot of work but any feedback on what's there so far would be great
[13:25:54] looks good to me at first glance
[14:22:10] removes almost 200 lines from gpt
[14:28:25] nice
[23:54:46] *** Joins: pwodkowx_ (pwodkowx@nat/intel/x-pcoxbogarootawxz)
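On the discovery log exchange around 12:46: the whole log lives in one pre-generated buffer (grown with realloc() as subsystems are added), and the Get Log Page handler just copies out the slice selected by the command's offset and length. The sketch below shows only that copy step, with made-up names rather than the real SPDK functions; zero-filling past the end of the generated data is one possible behavior assumed here, not necessarily what SPDK does.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative container for the pre-generated discovery log. */
struct discovery_log {
	void	*data;	/* entire pre-generated log, realloc()'d as subsystems are added */
	size_t	size;	/* current size of the generated log in bytes */
};

/*
 * Copy the window [offset, offset + length) of the pre-generated log into
 * the caller's buffer, zeroing any part of the request that falls past the
 * end of the generated data.
 */
static void
copy_discovery_log(const struct discovery_log *log, void *buf,
		   uint64_t offset, uint32_t length)
{
	size_t copy_len = 0;

	if (offset < log->size) {
		copy_len = log->size - offset;
		if (copy_len > length) {
			copy_len = length;
		}
		memcpy(buf, (const uint8_t *)log->data + offset, copy_len);
	}

	memset((uint8_t *)buf + copy_len, 0, length - copy_len);
}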