[00:25:24] *** Joins: mszwed (~mszwed@192.55.54.44)
[05:17:06] *** Joins: nKumar (uid239884@gateway/web/irccloud.com/x-jiddathlazjtmhrw)
[05:41:49] *** Quits: mszwed (~mszwed@192.55.54.44) (Ping timeout: 240 seconds)
[07:08:29] *** Quits: tomzawadzki (tzawadzk@nat/intel/x-dhylqrlqjwadbyto) (Ping timeout: 240 seconds)
[07:26:58] *** Joins: mszwed (~mszwed@192.55.54.42)
[07:40:21] *** Joins: mszwed_ (~mszwed@192.55.54.42)
[07:40:22] *** Quits: mszwed (~mszwed@192.55.54.42) (Remote host closed the connection)
[09:23:26] *** Quits: mszwed_ (~mszwed@192.55.54.42) (Ping timeout: 240 seconds)
[09:30:25] *** ChanServ sets mode: +o bwalker
[09:40:36] So I noticed the NVMe hello world has no .conf file; is that because it calls spdk_nvme_probe() directly and doesn't use a bdev? Or, put another way, does use of a bdev *require* a .conf file, or can that info be hardcoded into the app?
[09:48:09] right, everything in examples/nvme uses the NVMe driver directly, not the bdev, so they don't have .conf files
[09:49:00] and you should be able to build an app that doesn't require a conf file; I think the apps that currently require it just check for that in the argument parsing code in each app
[09:49:05] cool, so if an app uses the bdev layer, is it required to have a .conf, or can you hardcode that info?
[09:49:12] heh, cross typing :)
[09:50:00] do we have an example app that uses bdev w/o a conf file?
[09:50:16] it may not be possible to configure everything programmatically yet - some things are purely controlled by the conf file right now
[09:50:41] a lot of the test apps use a minimal conf that just turns on RPC and then use the RPC interface to set up the bdevs
[09:50:44] OK, I suppose the app could create the conf file programmatically and then pass it in...
[09:51:01] ah yes, forgot we had an RPC i/f :)
[10:04:01] *** Joins: jkkariu (~jkkariu@192.55.54.44)
[10:12:49] attn blobstore users: starting to plan/experiment with an API change, so this is both a heads-up and a request for input. Nothing is happening on this in the super near future, but please see this Trello card for more info: https://trello.com/c/0E3ADk7R
[11:40:07] hmmm, I'm in _spdk_bs_load_super_cpl() and have a dirty super block (putting together a blobstore CLI, so not surprising). There's a comment in there that this isn't supported yet (for some reason I thought it was)
[11:40:53] anyhow, now what? :) This is on a real disk, so do I need to re-init and close properly to continue, or manually zero out LBA 0, or what?
[11:40:58] for blobstore: in the hello_blob example, a blob is created, opened, and written to, but instead of closing the blob, you leave the spdk_blob open and use that to perform the sequential read. If one were to just have an isolated read function, is it necessary for the metadata thread to create another blob? My understanding is that in an isolated read function, I would want to just open an spdk_blob based on the blobid, instead of creating a blob, which inherently assigns a new blobid, correct?
[11:42:20] no, you wouldn't need to create another one. You just need to open the one that you know already exists, as you state
[11:42:33] got it, thanks!
[11:43:13] yeah, the hello_blob example doesn't attempt to show multi-threaded work, including the use of an MD thread...
[11:43:56] no worries! It's still an awesome example; I should be able to get a multithreaded implementation working. Thanks again!
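To illustrate the minimal-conf approach mentioned at [09:50:41]: a config of roughly this shape enables only the RPC server, and bdevs are then constructed at runtime over RPC. The section/key names and the rpc.py invocation below are recollections of SPDK configs from this period, not taken from the log, so treat them as assumptions and check etc/spdk/ in your tree:

    # minimal conf: no bdevs defined, just the RPC server (assumed syntax)
    [Rpc]
      Enable Yes
      Listen 127.0.0.1

    # then create bdevs at runtime, e.g. a 64 MB malloc bdev with 512-byte blocks:
    scripts/rpc.py construct_malloc_bdev 64 512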
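To make the [11:42:20] answer concrete, here is a minimal sketch of an isolated read path that opens an existing blob by a saved blobid instead of creating a new one. The context struct and its fields are hypothetical, and the prototypes assume the blobstore API names of this era (spdk_bs_md_open_blob, spdk_bs_io_read_blob); check include/spdk/blob.h for the exact signatures:

    #include "spdk/blob.h"

    /* hypothetical per-operation context carried through the callbacks */
    struct read_ctx {
        struct spdk_blob_store *bs;
        struct spdk_io_channel *channel;
        spdk_blob_id blobid;   /* id saved when the blob was created */
        uint8_t *payload;      /* page-aligned buffer for the read */
    };

    static void
    read_complete(void *cb_arg, int bserrno)
    {
        /* on success, ctx->payload now holds the first page of the blob */
    }

    static void
    open_complete(void *cb_arg, struct spdk_blob *blob, int bserrno)
    {
        struct read_ctx *ctx = cb_arg;

        if (bserrno) {
            /* no blob with that id, or the open failed */
            return;
        }
        /* offset and length are in pages (4KB), per the discussion below */
        spdk_bs_io_read_blob(blob, ctx->channel, ctx->payload,
                             0, 1, read_complete, ctx);
    }

    /* from the metadata thread: open the existing blob, don't create one */
    spdk_bs_md_open_blob(ctx->bs, ctx->blobid, open_complete, ctx);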
[11:44:44] FYI I'm also working on a CLI that will have some more advanced features and examples, but it will be a little while before it's done. If you have feature suggestions, the card is at https://trello.com/c/0E3ADk7R
[11:53:03] ha, wrt my previous question: turns out our hello_world NVMe example works nicely for smudging LBA 0 without changing a thing :)
[12:01:52] FYI I added the handling of dirty super blocks as a big idea thing on Trello...
[12:40:35] peluse: checking out your NVMe ut patches now - which one is actually first in the series?
[12:40:45] the GerritHub UI makes it look like they're all mixed around in various different orders
[12:41:07] looks like this one is first, maybe? https://review.gerrithub.io/#/c/372529/
[14:17:29] so to clarify, the spdk_bs_md_resize_blob function takes number of clusters as its unit for length, but spdk_bs_io_read/write take number of pages as their unit of length in the function call?
[14:18:09] nKumar: yes - that is correct
[14:18:16] resize can only be done in units of clusters
[14:19:25] got it, thank you! Should the page size and pages per cluster be defined when spdk_bs_init is called?
[14:19:44] page size will always be 4KB
[14:19:51] perfect, thanks!
[14:20:29] cluster size is specified in the spdk_bs_opts structure
[14:20:48] you can do something like this:
[14:20:54] struct spdk_bs_opts opts;
[14:21:07] spdk_bs_opts_init(&opts); /* this will set default values, including 1MB cluster size */
[14:21:22] beautiful, thanks!!
[14:21:23] opts.cluster_sz = 2 * 1024 * 1024; /* if you want to change cluster size to 2MB */
[14:21:38] spdk_bs_init(dev, &opts, cb_fn, cb_arg);
[16:27:24] sethhowe: just looking at some of these intermittent VM failures (SIGBUS in NVMe tests)...
[16:28:19] we have 7 VMs with 16 GB of RAM each - does the host machine have enough RAM for that?
[16:32:01] drv: yep! I just double-checked. There's plenty in this machine.
[16:32:12] ok, hmm, guess that isn't the issue then
[16:44:24] Q: I assume that if we try to iterate blobs and there are none, we'll get -ENOENT as the rc in the callback?
[16:44:58] (looks like it in the code, just double-checking)
[16:49:21] hmm, based on the unit test, that looks like the intended behavior (although it could certainly use better comments)
[17:02:31] Haha, thanks!
[18:16:48] *** Joins: ziyeyang_ (~ziyeyang@192.55.54.40)
[18:58:18] *** Quits: nKumar (uid239884@gateway/web/irccloud.com/x-jiddathlazjtmhrw) (Quit: Connection closed for inactivity)
[22:20:25] *** Joins: mszwed (~mszwed@192.55.54.44)
[23:10:39] *** Joins: tomzawadzki (tzawadzk@nat/intel/x-lytwsyebbdntkmye)
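Pulling the [14:20:48]-[14:21:38] snippet together with the clusters-vs-pages contrast from [14:17:29]: the opts/init lines are straight from the chat, while dev, cb_fn, cb_arg, blob, channel, payload, and write_complete are assumed to come from the surrounding application, and the resize/write prototypes are assumptions about the API of this period:

    struct spdk_bs_opts opts;

    spdk_bs_opts_init(&opts);           /* defaults, including 1MB cluster size */
    opts.cluster_sz = 2 * 1024 * 1024;  /* optional: 2MB clusters instead */
    spdk_bs_init(dev, &opts, cb_fn, cb_arg);

    /* later, once the blobstore and a blob are open: */
    spdk_bs_md_resize_blob(blob, 5);    /* length in CLUSTERS (5 * 2MB here) */
    spdk_bs_io_write_blob(blob, channel, payload,
                          0, 8,         /* offset and length in PAGES (4KB) */
                          write_complete, NULL);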
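And for the [16:44:24] question, a sketch of blob iteration that treats -ENOENT as "no (more) blobs", per the behavior confirmed at [16:49:21]. The iter_first/iter_next names and the double-pointer argument are recollections of the era's API and may differ in your tree; the iter_ctx struct is hypothetical:

    struct iter_ctx {
        struct spdk_blob_store *bs;
        struct spdk_blob *blob;
    };

    static void
    iter_complete(void *cb_arg, struct spdk_blob *blob, int bserrno)
    {
        struct iter_ctx *ctx = cb_arg;

        if (bserrno == -ENOENT) {
            /* empty blobstore, or we just walked past the last blob */
            return;
        }
        if (bserrno) {
            /* real error */
            return;
        }
        ctx->blob = blob;
        /* advance; iter_next is assumed to close the previous blob and open the next */
        spdk_bs_md_iter_next(ctx->bs, &ctx->blob, iter_complete, ctx);
    }

    spdk_bs_md_iter_first(ctx->bs, iter_complete, ctx);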