[00:21:57] *** Quits: tomzawadzki (tzawadzk@nat/intel/x-lytwsyebbdntkmye) (Remote host closed the connection)
[00:55:57] *** Quits: ziyeyang_ (~ziyeyang@192.55.54.40) (Quit: Leaving)
[00:56:25] *** Quits: mszwed (~mszwed@192.55.54.44) (Remote host closed the connection)
[05:26:16] *** Joins: mszwed (~mszwed@134.134.139.76)
[07:57:38] *** Joins: nKumar (uid239884@gateway/web/irccloud.com/x-wnmtdtnuoscqxfwy)
[07:58:56] *** Quits: mszwed (~mszwed@134.134.139.76) (Ping timeout: 240 seconds)
[07:59:56] So, given that for blobstore the metadata thread is the only one that can create/open/close blobs, is it really possible to make the most of having multiple IO worker threads and multiple channels for the reads/writes? From my understanding, even if there were some type of SPMC queue where blobs are created/opened by the MD thread and then a pool of IO threads consumes this queue, they would
[07:59:56] still have to return a message to the metadata thread in order to actually close the blob.
[08:20:14] I'm not quite sure what the question is, but yes, the current design is intended to have one MD thread and a bunch of IO threads, where the expected usage model is heavy on IO and light on MD operations
[08:21:53] and the proposed API change doesn't change that a *whole lot*. Instead of having an MD thread, you just have "threads", but if you want to do an MD operation, only one thread can have exclusive access to the blob. So the concept is the same, but the implementation is thought to be easier for apps to manage
[09:12:49] nKumar - today you can open a handle just once and then use it from multiple threads
[09:13:01] so if you're holding the blob open for a while, using multiple threads makes sense
[09:13:22] but we have a few API clarifications coming that will let you call open on any thread you want
[09:13:36] which I think makes a lot more intuitive sense
[09:16:28] Here are the basics of what I'd like to accomplish.
[09:16:28] Let's say I have (just for the sake of argument) a queue of 10 unique blobs that all need to be written to a disk. An MD thread can create 10 spdk_blobs, open them, and place them into a queue, let's say queue A, for consumption by N IO threads (the message would carry the blob, payload, blob ID, and bs).
[09:16:28] If I have a pool of "IO threads" that consume from this queue, then once they open a channel, write the blob in the message, and close the channel, they then have to pass a message back to the original MD thread to close the blobs that were opened, correct?
[09:20:33] * peluse thinks that makes sense but is also awaiting bwalker's response :)
[09:25:26] bwalker, I'm also wondering out loud just how expensive the lock required to mix MD and IO seamlessly from the app perspective would be on, for example, some of our best RocksDB metrics? Might be worth experimenting with?
[09:25:39] locks, plural of course
[11:44:43] *** Joins: patrickmacarthur (~smuxi@2606:4100:3880:1240:39f2:4fd8:2ba:3d8)
[11:48:18] hello all, is the SPDK nvmf target able to run as a secondary process? The commit logs seem to suggest that it should, but I am having issues getting it to launch successfully.
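
For reference, a minimal sketch of the hand-off pattern nKumar describes above: the MD thread creates/opens a blob and messages it to an IO thread, the IO thread writes through its own io_channel, then messages the blob back to the MD thread to be closed. This is a hedged illustration built on the SPDK event framework, not a drop-in example; the blobstore calls use the later spdk_blob_io_write()/spdk_blob_close() names (the tree at the time used spdk_bs_io_write_blob()/spdk_bs_md_close_blob()), and struct write_msg, md_close_blob(), io_write_done(), and io_thread_consume() are illustrative names, not part of SPDK.

    /*
     * Sketch only: MD thread <-> IO thread hand-off for blobstore writes.
     * Function names follow later SPDK releases; adjust for older trees.
     */
    #include "spdk/stdinc.h"
    #include "spdk/event.h"
    #include "spdk/blob.h"

    struct write_msg {                      /* illustrative, not an SPDK type */
        struct spdk_blob_store *bs;
        struct spdk_blob *blob;             /* created/opened on the MD thread */
        void *payload;
        uint64_t length;                    /* in pages/io_units */
        uint32_t md_lcore;                  /* core the MD thread runs on */
    };

    static void
    close_done(void *cb_arg, int bserrno)
    {
        free(cb_arg);                       /* done with this message */
    }

    /* Runs back on the MD thread: in the current model only it may close blobs. */
    static void
    md_close_blob(void *arg1, void *arg2)
    {
        struct write_msg *msg = arg1;

        spdk_blob_close(msg->blob, close_done, msg);
    }

    /* IO-thread write completion: pass the blob back to the MD thread's core. */
    static void
    io_write_done(void *cb_arg, int bserrno)
    {
        struct write_msg *msg = cb_arg;

        spdk_event_call(spdk_event_allocate(msg->md_lcore, md_close_blob, msg, NULL));
    }

    /* Runs on an IO thread; each IO thread allocates and uses its own channel. */
    static void
    io_thread_consume(void *arg1, void *arg2)
    {
        struct write_msg *msg = arg1;
        struct spdk_io_channel *ch = spdk_bs_alloc_io_channel(msg->bs);

        spdk_blob_io_write(msg->blob, ch, msg->payload, 0, msg->length,
                           io_write_done, msg);
        /* channel teardown and error handling omitted for brevity */
    }
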
[12:14:29] patrickmacarthur, I'm not familiar with that component, but someone will chime in soon; in the meantime, if you could describe the issues, I think that would help
[12:39:08] patrickmacarthur: nvmf_tgt should work as a secondary process if you specify a -i instance ID argument
[12:39:12] what app are you using as the primary?
[12:40:45] I am using an existing DPDK primary application that I have written and (attempted to) adapt to SPDK
[12:41:57] the SPDK multi-process support uses a specific --file-prefix argument that you would need to match with your DPDK app, but otherwise it should work (in theory)
[12:42:00] what issue are you hitting?
[12:42:29] After I posted, I tried to reproduce the error and instead got a different one, which I think may be a linker issue, since rte_mempool_lookup() is segfaulting because the tailq isn't initialized
[12:48:59] bwalker, pls confirm: in my prev hello world, the pattern for bailing out of any cb on error was to call spdk_app_stop() and return, at which point I'd try to clean up by unloading the bs, freeing mem, etc. Assuming that I do indeed have to unload the bs (I assume it makes sense in at least one error case somewhere), then I can't stop the framework and then unload; the unload pukes. In the latest patch I address this
[13:02:53] FYI, posted the blob cli example/utility app and will update it often; it's not done, but so far it does init bs, create blob, list blobs, and list bdevs, and it requires NVMe. A card listing the features is at https://trello.com/c/0E3ADk7R
[13:33:58] ok, I got nvmf_tgt to work as a secondary process; sorry for bothering you. Turns out my app was compiled against the DPDK shared libraries but SPDK was compiled against the static libraries.
[13:40:05] heh, everyone here is either seeking help or providing it, nobody is bothered by anything :)
[14:09:40] drv, for some of the stuff you suggested for the blob_cli that isn't exposed via the public API, I'm thinking I'll still dump it but categorize it as debug/private info, and after it's all said and done we can decide whether any of it makes sense to expose via the public API or not
[14:10:19] ok, sounds good to me
[16:29:52] bwalker, I may be missing something, but it doesn't appear that the super blob ID stored in MD is retrieved upon bs reload. The UT only tests set/get in mem. In the CLI I'm setting one and then later going to get it and it's not there
[16:30:41] I'm about to board a plane, so only here for a minute, but it should get written out when you do an unload
[16:30:48] I looked in _spdk_bs_load_super_cpl(), the callback for reading the superblock on load, and the right value is in memory but isn't copied to ctx->bs->superblobid (or whatever the element is) like many other values are
[16:30:59] that may be a bug
[16:31:09] yeah, it gets written out; it's not being read back in on load (see the note I typed while you were typing) :)
[16:31:16] not urgent obviously, can wait til you get back
[16:31:49] you can add the 1-line patch to copy the value into bs->super_blobid after it is read in
[16:32:03] that seems like a simple oversight
[16:32:18] bwalker, thanks, yeah, wanted to confirm that this looked like a bug. Easy enough! Have a safe flight...
[16:32:48] thanks - I'll try to answer the other questions above as soon as I have a chance to sit at a computer for more than 5 minutes
[16:36:25] yup, no hurry...
[19:05:52] *** Quits: nKumar (uid239884@gateway/web/irccloud.com/x-wnmtdtnuoscqxfwy) (Quit: Connection closed for inactivity)
[23:19:19] *** Joins: mszwed (~mszwed@134.134.139.78)
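
Following up on the super blob discussion above, the one-line fix bwalker suggests would presumably look something like the fragment below, inside _spdk_bs_load_super_cpl() in the blobstore code. The field names (ctx->super->super_blob, ctx->bs->super_blob) are assumptions inferred from the conversation and may not match the tree exactly; this is a sketch of the intended fix, not a verified patch.

    /* Inside _spdk_bs_load_super_cpl(), after the super block has been read and
     * validated, alongside the other fields already copied from ctx->super into
     * ctx->bs (cluster size, used-page counts, etc.): */
    ctx->bs->super_blob = ctx->super->super_blob;   /* assumed field names */
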