[00:19:30] *** Quits: drv (daniel@oak.drv.nu) (*.net *.split)
[00:32:39] *** Joins: drv (daniel@oak.drv.nu)
[00:32:40] *** asimov.freenode.net sets mode: +o drv
[01:13:56] *** Quits: drv (daniel@oak.drv.nu) (*.net *.split)
[01:27:12] *** Joins: drv (daniel@oak.drv.nu)
[01:27:12] *** asimov.freenode.net sets mode: +o drv
[01:51:30] *** Quits: drv (daniel@oak.drv.nu) (*.net *.split)
[02:05:04] *** Joins: drv (daniel@oak.drv.nu)
[02:05:04] *** asimov.freenode.net sets mode: +o drv
[03:00:26] *** Joins: pwodkowx (pwodkowx@nat/intel/x-frorzbjodkktoyvj)
[07:38:33] Hi, I have a problem understanding how I should use bdev I/O splitting in blobstore. Should I get rid of batch/sequence in blobstore entirely? What about the introduced dependency on bdev?
[07:46:57] For sequences: do I understand correctly that when a sequence issues an I/O operation, it is passed to the bdev layer and then, based on optimal_io_boundary, split into smaller operations?
[07:53:16] Another thing is that currently _spdk_blob_request_submit_op_split needs to know the remaining length to the cluster boundary. That is something we do not know in the bdev layer, right? Or maybe we should set optimal_io_boundary to the cluster size?
[08:26:26] hi ppelplin
[08:26:41] i think this would be used in the bdev/lvol driver
[08:27:23] when registering an lvol, set its optimal_io_boundary to reflect the cluster size - then the bdev layer will split read/write I/O that spans a cluster boundary
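A minimal sketch of the approach suggested at 08:27:23, assuming the struct spdk_bdev fields optimal_io_boundary and split_on_optimal_io_boundary and the blobstore helper spdk_bs_get_cluster_sz() behave as in the upstream headers of that era; the function name below and the way the blobstore handle is passed in are illustrative, not the actual lvol bdev code:

```c
#include "spdk/bdev_module.h"
#include "spdk/blob.h"

/* Illustrative sketch: when the lvol bdev is registered, express the
 * blobstore cluster size as an I/O boundary in blocks so the generic
 * bdev layer splits any read/write that spans a cluster boundary.
 */
static void
example_lvol_set_io_boundary(struct spdk_bdev *bdev, struct spdk_blob_store *bs)
{
	uint64_t cluster_sz = spdk_bs_get_cluster_sz(bs);

	/* optimal_io_boundary is expressed in blocks (blocklen bytes each), not bytes */
	bdev->optimal_io_boundary = cluster_sz / bdev->blocklen;
	bdev->split_on_optimal_io_boundary = true;
}
```

With split_on_optimal_io_boundary set, cluster-crossing reads/writes would arrive at blobstore already split, so blobstore itself would no longer need _spdk_blob_request_submit_op_split for that case.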
[08:36:11] peluse: are you using Windows with Samba shares during development?
[08:37:40] *** Joins: klateck (klateck@nat/intel/x-ktlinkzzcrgdhzty)
[08:40:24] no, Mac with Samba
[08:41:25] think maybe an smb.conf setting?
[08:41:43] yes, most likely
[08:41:55] I hit the same issue a long time ago.
[08:42:42] I would recommend something that is able to track file permissions, like sshfs
[08:43:49] do you still use samba? jimharris, you do, don't you? maybe shoot me your smb.conf settings?
[08:46:21] nope, I use X forwarding to launch everything on the development machine but display the GUI locally
[08:47:28] pwodkowx, OK, thanks. I think you're right though - looks like all I have to do is save in my editor over smb and it adds the x permission. I'll try a few settings until it goes away. thanks
[09:00:18] pwodkowx, thanks, got it fixed!
[09:05:33] peluse: i only edit via linux terminals - i have remote access set up but it's using nfs, not samba
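The exact settings used to fix this aren't shown in the log. The commonly cited smb.conf fix for files growing an execute bit when saved through Samba is to stop mapping DOS attributes onto Unix execute bits and to force a plain create mask; the option names below are standard Samba settings, but the share name and path are placeholders:

```ini
[spdk]
   path = /home/me/spdk
   # don't map the DOS archive/system/hidden attributes onto Unix execute bits
   map archive = no
   map system = no
   map hidden = no
   # newly created files come out as rw-r--r--
   create mask = 0644
```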
[09:10:46] bwalker - did you see pawel's comments on your check format patch?
[09:12:04] I am looking now
[09:17:14] I can put a filter around certain file types that just knows they should not be executable
[09:17:22] and then for the rest of the file extensions use the check I have in there
[09:17:51] I'll have to pull out the file extension - shouldn't be hard
[09:18:14] ok - cool
[09:18:22] I wonder how much it will speed it up
[09:20:04] *** Joins: travis-ci (~travis-ci@ec2-54-204-141-15.compute-1.amazonaws.com)
[09:20:05] (spdk/master) test/vm_setup.sh: add missing assume yes option (Karol Latecki)
[09:20:05] Diff URL: https://github.com/spdk/spdk/compare/6dbcb8931715...de2cd8382839
[09:20:05] *** Parts: travis-ci (~travis-ci@ec2-54-204-141-15.compute-1.amazonaws.com) ()
[09:26:20] oo, cut another second off the run time
[09:26:29] 1.8 seconds
[09:26:30] nice
[09:26:35] pwodkowx++
[09:27:38] going to increase the set of file extensions that it doesn't check for a shebang
[09:38:20] just pushed the new version - cut the time in half when all was said and done
[09:43:39] lhodev: can you take a look at sethhowe's latest shared lib patch when you have a chance?
[09:44:49] jimharris: Doing that right now in fact, after having just sent Seth some email related to this work.
[09:47:42] bwalker: thanks for the nvmf arch overview. I'm thinking that if we were to use the "Subsystem" and "Controller" names for what is currently called "target" and "device" in vhost, then we would confuse even more people from the vhost world
[09:48:18] In iSCSI, Subsystem -> TargetNode, Controller -> Session, Namespace -> LUN
[09:48:19] and I'm starting to think that DPDK's terminology of a "driver" that creates a "device" for each accepted socket connection is not that bad
[09:48:32] more or less - there is an extra layer of something called a "bus" in iSCSI
[09:48:47] iSCSI doesn't really have a bus
[09:48:56] SCSI does
[09:49:17] it's in the command, because they're scsi commands
[09:49:19] but otherwise not used
[09:50:07] I think "target" and "device" are probably fine, honestly
[09:50:27] instead of "device", you could use "controller" to match nvmf or "session" to match iSCSI
[09:51:21] "target" is slightly confusing because it's not quite the same as what we mean when we say "nvmf target" or "vhost target"
[09:51:26] which is the whole collection of everything
[09:51:36] i feel like "target" and "device" will not be clear unless you read documentation explaining them
[09:53:13] it certainly took me a long time to wrap my head around the "subsystem" and "controller" terminology in nvmf
[09:53:20] whereas the word "session" is immediately obvious
[09:55:24] is there access control in vhost of any sort? or is it entirely based on the permissions to the domain socket?
[09:57:13] nope, there's no access control in vhost at all
[09:57:41] can a target have more than one block device?
[09:57:44] i.e. namespaces/luns
[09:58:15] vhost-blk is only one block device
[09:58:22] vhost-scsi is a bus of scsi devices
[09:58:52] oh, of course, vhost-scsi
[09:58:55] sending scsi commands
[09:58:56] currently we put each block device as LUN0 on its own scsi device
[09:59:29] I've forgotten too much about SCSI to remember the pros and cons of using multiple luns vs. multiple buses
[09:59:30] for vhost-scsi the name "controller" would fit - it is a virtual scsi controller after all
[09:59:53] yeah - controller/session might be the better pair of names here
[10:00:12] especially if some day we get vhost-nvme
[10:00:14] i could live with a vhost-blk controller (even if it's just one block device)
[10:00:16] yep
[10:00:34] actually, about the access control - the vhost-user spec doesn't really cover this whole area of connecting multiple initiators to a single socket
[10:01:11] i don't think the vhost-user spec even covers the concept of the socket
[10:01:12] we could implement some sort of access control
[10:01:44] it doesn't necessarily have to interfere with the protocol itself
[10:01:52] no - just let anyone with permission to connect to the socket connect
[10:02:00] no need to do anything more complicated
[10:02:10] there is 1 socket per "target", right?
[10:02:19] listen socket
[10:02:29] right
[10:02:41] so then it's just like nvmf - the "target" is just an access control list
[10:02:59] it holds a list of block devices and defines which network interface it can accept connections on (in this case, the socket)
[10:03:31] nvmf is slightly more complex here - it can have multiple "subsystems" (targets in vhost) listening at the same address on the network
[10:03:39] so it uses host names to enforce the access control
[10:03:56] but for vhost I don't think you need to go that far - just make each one have a unique socket
[10:04:42] yep, that makes sense
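To make the access-control model just described concrete: a vhost "target" in this sense is nothing more than a UNIX listen socket plus the block devices reachable through it, with filesystem permissions on the socket acting as the ACL. A purely illustrative sketch - none of these names are real SPDK or rte_vhost structures:

```c
#include <limits.h>
#include <stddef.h>

struct spdk_bdev;	/* forward declaration; the real type lives in spdk/bdev.h */

/* Illustrative only: the "target" as an access-control list.
 * Whoever has filesystem permission to connect to socket_path gets
 * access to exactly the block devices listed here - nothing more.
 */
struct example_vhost_target {
	char socket_path[PATH_MAX];	/* one unique UNIX domain socket per target */
	struct spdk_bdev **bdevs;	/* block devices exposed through this socket */
	size_t num_bdevs;
};
```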
[10:05:50] what vhost calls the "driver" today we would call an "initiator" in iSCSI and the "host" in nvmf, right?
[10:05:58] although we use initiator colloquially for nvmf a lot
[10:06:20] because host means server for most people - it's just people that make PCIe devices that call the client side the host
[10:06:32] for nvmf it's:
[10:06:49] subsystem -> controller <-> nvmf initiator
[10:07:05] and for vhost it is now:
[10:07:16] target -> device <-> driver
[10:07:25] yeah
[10:07:53] "driver" isn't a terrible word in this case, but "initiator" would probably be more obvious
[10:07:55] or "client"
[10:09:13] server -> session <-> client is obviously the most intuitive for people not immersed in the storage world
[10:09:36] in the vhost-user spec the app which shares its virtqueues is called the "master"
[10:10:09] so the set of "targets" is the "master"?
[10:10:29] so with this terminology SPDK vhost is a slave, and e.g. QEMU is a master
[10:10:41] oh
[10:10:53] so the client is the "master"
[10:11:08] well... yeah
[10:11:09] that's an interesting choice of words
[10:11:36] I sort of understand - the virtqueues are allocated on the client after all
[10:11:57] but the connection process starts by QEMU connecting to a socket that our vhost target is listening on
[10:12:21] i.e. QEMU "initiates"
[10:13:44] yeah - but there's also another thing
[10:14:15] the vhost-user spec also uses the names "client" and "server"
[10:14:36] server is obviously the app that creates the socket file
[10:14:47] and it can be either master or slave
[10:15:37] which means that SPDK vhost could theoretically connect to an external socket file
[10:16:00] oh, like an already-running VM?
[10:16:19] yep
[10:16:57] then you could restart the vhost app without restarting the VM
[10:18:12] anyway, I believe the client/server and master/slave names are out of the game as well
[10:22:04] peluse: posted a comment on https://review.gerrithub.io/#/c/spdk/spdk/+/427559/
[10:23:14] sethhowe: Just a heads up that I got your email, am digesting that now and looking at your latest patches.
[10:26:37] jimharris, ack.. will check it out, thanks
[10:28:40] dpdk actually keeps using: vhost driver -> vhost device <-> virtio PMD
[10:29:16] this isn't accurate though, as virtio PMD isn't really a virtio driver
[10:30:38] but what if we started to use vhost driver -> vhost device <-> vhost initiator
[10:30:48] it's not really that bad, is it?
[10:32:51] I mean, I can wrap my head around that if I think about it
[10:33:00] but it isn't intuitive, I don't think
[10:33:04] neither was nvmf though
[10:34:31] i really liked server -> session <-> client
[10:35:13] but I can imagine myself trying to explain that "server" here can also be a vhost-user client, and "client" can be a vhost-user server
[10:38:42] do you want me to write a fancy email with all those versions and their pros/cons, so we can decide on a final naming?
[10:39:00] bwalker, jimharris?
[10:39:00] well I think the more important part is figuring out who needs to be the decision maker
[10:39:03] i'd be happy with a non-fancy e-mail too :)
[10:39:14] these changes impact rte_vhost in DPDK, right?
[10:40:10] i'm not sure if we introduce them to dpdk as well
[10:40:32] i was thinking this naming just applies to spdk
[10:40:33] we certainly could
[10:40:40] for now
[10:40:49] to our fork of rte_vhost?
[10:41:00] won't that make it forever diverge, effectively?
[10:41:06] no - to lib/vhost
[10:41:27] I'm not well versed in exactly where the line of demarcation is between spdk and dpdk here
[10:41:28] our rte_vhost copy stays unchanged for now
[10:41:51] so lib/vhost is what is implementing vhost_blk and vhost_scsi, right?
[10:42:01] right
[10:42:25] so there will be a mismatch of terms between that library and the rte_vhost code
[10:42:27] then rte_vhost is what listens on a socket file and replies to received messages
[10:44:16] (sharing memory and stuff like that)
[10:50:45] my silence implies you're right
[10:51:11] initially I wanted to introduce target/device naming to both lib/vhost in spdk and rte_vhost2 in dpdk (it's our PoC reimplementation of rte_vhost)
[10:51:28] well I think it's worth writing an email with pros/cons to us
[10:51:41] keeping in mind that eventually, you're going to want to make these same arguments to the DPDK team
[10:53:30] but if I saw an spdk_vhost_controller struct that contains an rte_vhost_device struct, it wouldn't confuse me too much
[10:55:51] i'll be sure to write that email
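A sketch of the layering mused about at 10:53:30 - an SPDK-side "controller" that wraps whatever the rte_vhost layer calls its per-connection "device". The two struct names are taken from that message; everything else (fields, the fixed-size array) is hypothetical:

```c
#include <stddef.h>

struct rte_vhost_device;	/* hypothetical rte_vhost-side object for an accepted connection */
struct spdk_scsi_dev;		/* SPDK SCSI device; the real type lives in spdk/scsi.h */

/* Hypothetical nesting: SPDK naming on the outside, rte_vhost naming on
 * the inside. The outer object owns the storage-side state, the inner
 * one represents the vhost-user connection and its virtqueues.
 */
struct spdk_vhost_controller {
	struct rte_vhost_device *vdev;		/* the accepted vhost-user connection */
	struct spdk_scsi_dev *scsi_devs[8];	/* vhost-scsi: each bdev exposed as LUN0 of its own SCSI dev */
	size_t num_scsi_devs;
};
```

The point is only that the two vocabularies can nest without colliding; the real structures in lib/vhost and the rte_vhost copy look different.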
[11:00:53] would like feedback on https://review.gerrithub.io/#/c/spdk/spdk.github.io/+/427929/ - website updates to describe use of gerrit hashtags as part of the core maintainer workflow
[11:09:21] how do I tag a review as needing a +1 from someone?
[11:09:32] is there a particular format we use?
[11:09:45] add a hashtag and type "waiting for +1"
[11:09:56] that specific string is what the url looks for
[11:10:05] or "question" if you have a question about the patch
[11:10:26] i'll add one for "needs rebase"
[11:10:48] I just tagged the iscsi addition to spdkcli as needing a +1 from karol
[11:12:51] sethhowe: I made a comment here that you should look at: https://review.gerrithub.io/#/c/spdk/spdk/+/427807/
[11:12:52] ok - fyi - those hashtags will only apply to the "Needs Review Next" section
[11:13:19] meaning if it already has a +2 from a core maintainer, it will still show up under "Needs my +2" or "Waiting for +2"
[11:13:27] ok
[11:24:32] bwalker: Thanks. I responded. Sounds like another good reason to build out a stub environment. Then we could do regression testing against that type of dpdk-dependent stuff.
[11:47:56] *** Joins: travis-ci (~travis-ci@ec2-54-163-170-225.compute-1.amazonaws.com)
[11:47:57] (spdk/master) vhost: embed destroy ctx into vhost dev struct (wuzhouhui)
[11:47:57] Diff URL: https://github.com/spdk/spdk/compare/de2cd8382839...97c45f56312b
[11:47:57] *** Parts: travis-ci (~travis-ci@ec2-54-163-170-225.compute-1.amazonaws.com) ()
[16:49:10] *** Joins: travis-ci (~travis-ci@ec2-54-162-85-101.compute-1.amazonaws.com)
[16:49:11] (spdk/master) crypto: change name of the crypto io_device (paul luse)
[16:49:11] Diff URL: https://github.com/spdk/spdk/compare/97c45f56312b...9938bfaf03ba
[16:49:11] *** Parts: travis-ci (~travis-ci@ec2-54-162-85-101.compute-1.amazonaws.com) ()