[03:11:20] *** Joins: guerby_ (~guerby@ip165.tetaneutral.net)
[03:11:47] *** Quits: guerby (~guerby@april/board/guerby) (Ping timeout: 255 seconds)
[07:16:35] *** Joins: lhodev (~Adium@inet-hqmc03-o.oracle.com)
[07:58:36] FYI I've seen a few test failures lately unrelated to my patches.... "=> 112 run_test ./test/iscsi_tgt/iscsi_tgt.sh" is that a known issue?
[07:58:53] *** Parts: lhodev (~Adium@inet-hqmc03-o.oracle.com) ()
[08:25:14] *** Joins: sethhowe (~sethhowe@192.55.55.41)
[08:41:13] jimharris, bwalker - if there's some work w/blobstore on reset testing/handling I'm happy to sign up! The more I can learn between now and Sep 12 the better...
[08:42:59] absolutely! I had your name in mind :)
[08:43:18] nice!
[08:44:05] just point me to the error injection code (I assume we are injecting stuff below at some level), I'll start digging in
[08:44:07] i think we can get pretty good coverage with the unit tests without a lot of effort
[08:44:19] way cool
[08:49:45] *** ChanServ sets mode: +o bwalker
[09:57:35] drv, question on one of your comments in https://review.gerrithub.io/#/c/372541/ when you get a min
[09:57:46] sure
[09:58:00] its in the review.. :)
[09:58:54] for the memcmp() of the nvme_payload struct itself, you should just use sizeof(payload)
[09:59:07] the payload_size in the nvme_request is the size of the buffer that the nvme_payload points to, not the struct itself
[09:59:22] the naming is a little bit confusing...
[09:59:32] haha, I see what you're saying now. thanks
[09:59:40] will fix!
[09:59:42] thanks
[09:59:46] (see my comments, followed quickly by "nevermind")
[09:59:50] it's definitely confusing
[10:04:06] yeah, the review comments are making it less confusing moving forward I hope, thanks!
[10:07:57] OK, one down. Looks like the process for editing all patches in the chain is working good
[10:49:59] *** guerby_ is now known as guerby
[10:50:35] *** Quits: guerby (~guerby@ip165.tetaneutral.net) (Changing host)
[10:50:35] *** Joins: guerby (~guerby@april/board/guerby)
[11:23:07] *** Quits: gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl) (Quit: Textual IRC Client: www.textualapp.com)
[11:32:25] *** Joins: lhodev (~Adium@inet-hqmc01-o.oracle.com)
[11:43:34] we've modified the way build output is copied to the ci.spdk.io server, so results should be immediately available as soon as the build finishes now
[11:48:19] the IRC log is also syncing much more quickly
[11:48:27] for instance, drv's comment above this is already sync'd
[12:06:24] impressive :)
[12:56:41] bwalker, drv: instead of rebasing the nvmf series to make the comment changes, couldn't we just add this as another patch at the end of the series?
[12:57:06] well - I guess there's some other rebases needed anyways
[12:57:41] that's why I was waiting to rebase until the reviews were done
[12:57:49] I'm not convinced there won't be other comments anyway
[12:58:12] yeah - I was thinking it's a shame to rebase the whole series just for a comment change but you'll need to do it anyways because of some merge conflicts
[13:00:00] *** Joins: gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl)
[13:02:25] I just rebased the first 4 and incorporated any comments that were left
[13:02:34] *** Parts: lhodev (~Adium@inet-hqmc01-o.oracle.com) ()
[13:02:55] *** Joins: lhodev (~Adium@inet-hqmc01-o.oracle.com)
[13:07:32] jimharris: do you really need virtio_ethdev.h/c?
[13:09:30] the contents of those files are needed but the "eth" stuff needs to get factored out
[13:09:38] i listed that as one of the todo items
[13:10:06] that file now mostly has the logic for adding descriptors to the ring and fetching completions off of them
[13:10:12] ring => virtqueue
[13:10:36] ok, next question
[13:10:51] actually - strike that, it does have a lot of code that's needed but the rxtx stuff isn't in that file
[13:10:54] shoot
[13:10:55] this is just a bdev module - does it make sense to do the initiator as a library separately?
[13:11:24] sure - that's definitely possible
[13:14:08] i can add both of those to the todo list in the README
[13:14:32] i already mention "eth" but can put more specifics that its not just the API but the filenames too
[13:39:33] bwalker: https://review.gerrithub.io/#/c/374360/
[13:40:39] are you pointing out the failure? it happened on the patch before it too
[13:40:49] yeah
[13:41:02] I can't explain it just yet - the error is that somehow it can't find a route to the IP I want to connect to
[13:41:34] it may have more to do with how it's shutting down in the test before the one that fails
[13:42:00] it passed on the physical NIC too - just failed on the soft roce one
[13:42:55] I'm not confident that the connections all get correctly cleaned up on shutdown in every case
[13:43:28] but I'd prefer to fix it after the refactoring series if I can get away with that
[14:25:31] *** Quits: lhodev (~Adium@inet-hqmc01-o.oracle.com) (Remote host closed the connection)
[14:32:15] *** Quits: gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl) (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[14:32:52] bwalker: blobstore definitely needs a readv/writev api
[14:33:17] looking at the lvol code now - vhost will definitely give us scattered payloads
[14:33:58] peluse: ^
[14:34:00] i think i'll look at this from an iscsi context first
[14:34:02] yeah, bdevs generally ought to support readv/writev - we don't have any way to indicate they don't right now
[14:34:03] lol
[14:34:34] yeah - I think they have to - it shouldn't be an option (unless we want to make vhost double-buffer - yuck)
[14:34:37] (RBD is currently broken, c.f. https://trello.com/c/i5NbJkWF)
[14:37:34] 99% of the time we should just be able to pass an iov straight through
[14:37:47] but anything crossing a cluster boundary will require splitting somewhere
[14:38:45] blobstore will split across cluster boundaries
[14:39:08] for single buffer payloads
[14:39:15] splitting an iov array is a bit messier
[14:39:49] oh, yeah the iov code will have to split just like the single buffer code
[14:39:58] which is a little bit more complicated
[14:40:01] but not terrible in general
[14:40:15] I think we do it callback based like we did in the nvme driver
[14:40:58] hmmm -
[14:41:14] no, that won't work
[14:42:25] so for logical volumes, if clusters are on the order of 1GB, we should almost never have to split
[14:42:39] so if we just allocate separate iov arrays and do iov copies, that should be fine
[14:57:52] *** Joins: lhodev (~Adium@inet-hqmc08-o.oracle.com)
[15:02:32] *** Joins: patrickmacarthur (~smuxi@2606:4100:3880:1240:9922:701e:33a4:d270)
[15:53:09] bwalker, cool, want me to tackle that?
[18:23:57] *** Quits: jkkariu (~jkkariu@192.55.54.44) (Ping timeout: 240 seconds)
[18:24:07] *** Joins: jkkariu (~jkkariu@192.55.54.44)
[18:37:09] *** Quits: lhodev (~Adium@inet-hqmc08-o.oracle.com) (Quit: Leaving.)