[00:25:54] *** Joins: Vikas_Aggarwal (73719c02@gateway/web/freenode/ip.115.113.156.2)
[00:56:25] *** Quits: baruch (~baruch@141.226.162.100) (Read error: Connection reset by peer)
[01:19:43] *** Joins: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl)
[01:49:28] *** Joins: baruch (~baruch@bzq-82-81-85-138.red.bezeqint.net)
[02:32:50] *** Joins: tklsk (86bfdc49@gateway/web/freenode/ip.134.191.220.73)
[02:40:08] *** Joins: sbasierx__ (~sbasierx@192.198.151.43)
[02:40:19] hi everyone
[03:12:02] *** Joins: tomzawadzki (~tomzawadz@134.134.139.72)
[03:36:51] *** Quits: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl) (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[04:03:53] *** Joins: ziyeyang_ (~ziyeyang@134.134.139.74)
[04:05:08] *** Quits: ziyeyang_ (~ziyeyang@134.134.139.74) (Client Quit)
[04:23:57] *** Quits: tklsk (86bfdc49@gateway/web/freenode/ip.134.191.220.73) (Quit: Page closed)
[04:44:49] sbasierx__: hello
[05:15:39] Hello
[05:15:56] Have a question on rocksdb.conf
[05:16:38] Can I use /dev/ram0 (Linux ramdisk) as a raw block device in rocksdb.conf?
[05:17:18] ... want to try db_bench traffic but I don't have access to an NVMe SSD at the moment
[05:26:59] *** Quits: sbasierx__ (~sbasierx@192.198.151.43) (Quit: Going offline, see ya! (www.adiirc.com))
[05:31:16] *** Joins: sbasierx__ (~sbasierx@192.198.151.43)
[06:09:05] Vikas_Aggarwal: Hey. I'm not familiar with rocksdb whatsoever, but if it works with NVMe, it should work with other backends as well
[06:09:43] you should be able to use a raw block device via Linux AIO or use an SPDK malloc (ramdisk) bdev
[06:11:27] http://www.spdk.io/doc/bdev.html Here's the guide on how to set up particular backends
[06:13:49] darsto: Thanks. The following step has put me in some doubt:
[06:14:08] http://www.spdk.io/doc/blobfs.html#blobfs_rocksdb
[06:14:15] "Append an NVMe section to the configuration file using SPDK's gen_nvme.sh script."
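For reference, darsto's first suggestion above - exposing the Linux ramdisk to SPDK as a raw block device through Linux AIO - would be an [AIO] section in the SPDK configuration file. This is a sketch only; the section and key names are assumed from SPDK's legacy INI-style configuration of that era, not taken from this conversation:

```
# Expose the Linux ramdisk /dev/ram0 to SPDK as an AIO-backed bdev named AIO0
[AIO]
  AIO /dev/ram0 AIO0
```

The second suggestion (an SPDK-allocated malloc ramdisk) would instead use a [Malloc] section, which needs no backing device at all.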
[06:14:26] scripts/gen_nvme.sh >> /usr/local/etc/spdk/rocksdb.conf
[06:15:00] I can't run gen_nvme.sh as my server has no NVMe SSD.
[06:15:52] I agree it looks misleading
[06:16:28] let's ping jimharris and bwalker
[06:17:03] could we rework that guide a bit?
[06:17:29] Sure we can
[06:36:27] *** Quits: sbasierx__ (~sbasierx@192.198.151.43) (Quit: Going offline, see ya! (www.adiirc.com))
[06:45:37] *** Quits: baruch (~baruch@bzq-82-81-85-138.red.bezeqint.net) (Ping timeout: 248 seconds)
[07:11:35] *** Quits: Vikas_Aggarwal (73719c02@gateway/web/freenode/ip.115.113.156.2) (Quit: Page closed)
[08:44:12] Hi, can you merge this https://review.gerrithub.io/#/c/371864/ ASAP?
[08:46:24] *** Joins: lhodev (~Adium@inet-hqmc03-o.oracle.com)
[08:46:50] *** Parts: lhodev (~Adium@inet-hqmc03-o.oracle.com) ()
[09:32:29] pwodkowx, darsto: can we merge https://review.gerrithub.io/#/c/371864/ separately from the SCSI hotplug fix it was pushed on top of, or do we need both?
[09:32:58] currently, the previous patch (https://review.gerrithub.io/#/c/371863/) has a newer revision, and the vhost hotplug one will need to be rebased either way
[09:36:31] *** Quits: tomzawadzki (~tomzawadz@134.134.139.72) (Ping timeout: 252 seconds)
[10:01:24] drv: it can be merged separately
[10:01:28] *** Guest3057 is now known as darsto_
[11:09:02] *** Joins: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl)
[11:54:01] bwalker: can you take another look at https://review.gerrithub.io/#/c/390200/?
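On a machine with no NVMe SSD, the gen_nvme.sh step above has nothing to enumerate, which is exactly the problem Vikas_Aggarwal hit. An alternative consistent with darsto's malloc suggestion would be to append a ramdisk-backed [Malloc] section by hand. A sketch only - the section and key names are assumed from SPDK's legacy INI configuration, and a temp file stands in for /usr/local/etc/spdk/rocksdb.conf so the snippet runs without root:

```shell
# Instead of running scripts/gen_nvme.sh, append a malloc (ramdisk) bdev
# section. The real target would be /usr/local/etc/spdk/rocksdb.conf; a
# temp file is used here so the sketch is runnable anywhere.
CONF=$(mktemp)
cat >> "$CONF" <<'EOF'
[Malloc]
  NumberOfLuns 1
  LunSizeInMB 1024
EOF
grep -c '^\[Malloc\]' "$CONF"   # prints 1
```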
[11:54:26] the idea here was that for things like lvols, we always register a unique name (like a uuid) and then we can create aliases based off of that
[11:54:56] so then if a user wants to change the friendly name of a logical volume, it only affects the alias list
[12:00:24] drv: thanks for https://review.gerrithub.io/#/c/393214/ - i've been meaning to make that change for a while
[12:49:46] sethhowe: the spdk test pool status page says centos6 runs bdev tests - but the timing charts indicate otherwise
[13:03:50] *** Joins: lhodev (~Adium@inet-hqmc04-o.oracle.com)
[13:03:55] *** Parts: lhodev (~Adium@inet-hqmc04-o.oracle.com) ()
[13:25:55] *** Joins: James (~James@208.185.211.2)
[13:26:19] *** James is now known as Guest94352
[14:01:13] jimharris, where are the timing charts again?
[14:02:35] there is a timing.svg in each test system's results directory
[14:04:15] thx
[14:07:22] jimharris, is that where we noted the 12 sec runtime for https://review.gerrithub.io/#/c/387556/, or were you looking at timestamps in the log? and either way, how did you know which test system this test was running on? (sorry for all the Q's)
[14:08:24] i was looking at fedora-03 specifically - that test system is currently the long pole in the tent and runs nvme-of tests (which this patch touches)
[14:09:51] autotest/nvmf_tgt/host/perf is 7 seconds on master currently but this patch increases it to 19 seconds
[14:10:03] got it, thanks. yeah I see the test machine description on the CI page
[14:10:45] if we need to bump it up we can, but 12 seconds here and there really adds up quickly - it's not clear to me that this test is worth the extra 12 seconds per patch
[14:12:27] yeah, I agree. I'm just trying to figure out what to get in the habit of looking for on some of these... so where is the output for master at any given time, or did you just look at another patch touching some unrelated area to see the timing chart w/o this patch?
[14:15:03] there's a link to the latest master build at the top of ci.spdk.io
[14:45:38] thanks
[14:46:18] bwalker, is the GH suggested link for reviews on http://www.spdk.io/development/ what we want to use? You had suggested something newer, I think, a few months ago that we could only bookmark, and I've since rebuilt my VM so I don't have that URL anymore
[14:47:01] hmm, it's probably a fine one to use. I'll check to see what it looks like
[14:50:12] trying to figure out why, when I use that one, I'm not seeing some things that I figured I should see - e.g. https://review.gerrithub.io/#/c/393129/ doesn't show up in that list
[14:52:11] on some of the queries, we hide reviews that are assigned to someone specifically
[14:52:18] not sure we're doing that with the general query
[14:52:31] as soon as that one is reviewed by drv, it should appear for everyone
[14:53:06] that's odd, I see others that are assigned specifically to someone...
[14:53:21] (using that same review query on the dev page)
[14:53:44] I have to look at the query, but if the person assigned has voted, it appears
[14:53:50] for the maintainer query we use
[14:54:01] hmm, I'm not sure that's a great idea (I haven't actually switched to the new query string yet)
[14:54:52] yeah, seems like we should just be showing everything that needs review - not sure I see a reason to ever 'hide' a patch?
[14:55:28] on the maintainer query they all show in one section
[14:55:38] but there is a section above that which filters
[14:56:00] the number of patches can be pretty overwhelming - we're trying to come up with ways to focus on the reviews that are most important first
[15:00:13] we had that "Important Reviews" query that I still have, but it never made it, I don't think, to spdk.io, so I doubt anyone else has it - the one that lists things that are starred by maintainers...
[16:14:13] *** Quits: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl) (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[16:36:33] *** Quits: Guest94352 (~James@208.185.211.2) (Remote host closed the connection)
[16:48:54] *** Joins: Shuhei (caf6fc61@gateway/web/freenode/ip.202.246.252.97)
[17:29:36] *** Joins: James (~James@73.93.140.61)
[17:30:06] *** James is now known as Guest9242
[17:34:07] *** Quits: Guest9242 (~James@73.93.140.61) (Ping timeout: 248 seconds)
[18:26:24] *** Joins: ziyeyang_ (ziyeyang@nat/intel/x-kgmkykwspflkzvrs)
[18:28:27] *** Joins: James (~James@2601:640:8300:10f3:1c7f:f779:d77c:8d51)
[18:28:50] *** James is now known as Guest35559
[22:35:56] *** Quits: Guest35559 (~James@2601:640:8300:10f3:1c7f:f779:d77c:8d51) (Remote host closed the connection)
[23:25:29] *** Joins: James (~James@2601:640:8300:10f3:1c7f:f779:d77c:8d51)
[23:25:53] *** James is now known as Guest61101
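For anyone trying to reconstruct the review queries discussed in the afternoon, Gerrit's generic search syntax could express them roughly as below. The project's actual saved query strings do not appear in this log; the operators are from Gerrit's standard search documentation, and the nick in the second query is purely an example:

```
# everything open that could use review
status:open project:spdk/spdk

# an "Important Reviews"-style view: open changes starred by a maintainer
status:open project:spdk/spdk starredby:jimharris
```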