[01:10:11] *** Joins: tomzawadzki (~tomzawadz@192.55.54.42)
[02:17:39] *** Quits: Shuhei (caf6fc61@gateway/web/freenode/ip.202.246.252.97) (Ping timeout: 260 seconds)
[04:11:18] *** Quits: drv (daniel@oak.drv.nu) (Ping timeout: 276 seconds)
[04:12:58] *** Joins: drv (daniel@oak.drv.nu)
[04:12:58] *** ChanServ sets mode: +o drv
[06:29:44] *** Quits: sethhowe (sethhowe@nat/intel/x-qqbflnldfwnpghmq) (Remote host closed the connection)
[06:30:02] *** Joins: sethhowe (~sethhowe@192.55.54.38)
[08:05:58] *** Joins: lhodev (~Adium@inet-hqmc07-o.oracle.com)
[08:33:37] *** Quits: sethhowe (~sethhowe@192.55.54.38) (Remote host closed the connection)
[10:02:27] *** Quits: tomzawadzki (~tomzawadz@192.55.54.42) (Ping timeout: 240 seconds)
[10:04:52] jimharris: if we're going to store a reverse relationship from base blob to thin provisioned blob in memory, how do we build that up on startup?
[10:04:57] is that in blobstore or the lvol library?
[10:05:07] (blobstore doesn't open all blobs at startup because we have the masks)
[10:05:35] we'll just have the thin provisioned to base blob relationship
[10:05:47] not the reverse
[10:05:53] but if someone deletes a base blob, how do you know if that is allowed?
[10:08:59] we'll need to mark blobstores that are using thin provisioning and walk the blob list during load
[10:09:12] *** Guest83814 is now known as darsto
[10:09:15] let's chat more after this meeting
[10:10:52] bwalker: "The bdev provided is allocated by the module and must be filled out appropriately." - is this a valid sentence?
[10:11:25] either way it sounds weird
[10:17:08] hmm, it's valid but it does sound weird
[10:17:13] let me rework that one
[13:47:05] bwalker: ping on https://review.gerrithub.io/#/c/395039/ - is this still needed?
[13:50:29] it's really working around a bug - spdk_nvmf_tgt_listen isn't asynchronous
[13:50:45] but it kicks off a background operation
[13:50:58] so if you do spdk_nvmf_tgt_listen(), then spdk_nvmf_subsystem_add_listener(), they're racing
[13:51:39] not in the data integrity sense, but in the sense that transports can be placed in the poll group ahead of when you'd expect
[13:51:47] so his patch is really fine for now
[13:51:56] the "right" solution is to make spdk_nvmf_tgt_listen actually asynchronous
[13:52:03] but if you do that, you have to rewrite the config file parser
[13:52:05] *** Quits: gila_ (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl) (Read error: Connection reset by peer)
[13:52:18] so I'm not opposed to putting this patch in
[13:53:08] this can also hit with your new add listener RPC, right?
[13:53:20] just by adding the same address twice, no race required
[13:54:32] yeah
[13:54:53] I have to look at that actually
[15:53:25] *** Joins: Shuhei (caf6fc61@gateway/web/freenode/ip.202.246.252.97)
[16:28:22] Hi Jim,
[16:28:32] about the iSCSI poll group, I understood your intention to be that we should reduce the per-core frequency of system calls and the concurrency of system calls across multiple cores.
[16:28:40] Is that correct?
[16:28:49] about the login poller, one global login poller checks all portals? that makes sense to me.
[16:28:57] The reason I proposed a login poller per portal is that it would make it easier to add/remove portals dynamically without any conflict with the login poller.
[16:29:06] If one global login poller is used, I think it should have a cache of portals, and adding/removing portals to/from the cache can be done through an SPDK message.
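Shuhei's last point above - letting one global login poller own a portal cache and routing portal add/remove through an SPDK message - matches the thread-messaging pattern SPDK uses elsewhere. Below is a minimal sketch of that pattern, assuming the spdk_thread_send_msg() API from spdk/thread.h; the login_portal and login_poller_ctx types and the g_login_poller global are entirely hypothetical, for illustration only.

```c
/*
 * Minimal sketch only - not the actual iSCSI login poller code. Assumes the
 * SPDK thread API (spdk/thread.h); login_portal, login_poller_ctx and
 * g_login_poller are hypothetical names.
 */
#include "spdk/thread.h"
#include "spdk/queue.h"

struct login_portal {
	TAILQ_ENTRY(login_portal) link;
	/* address, port, socket, etc. elided */
};

struct login_poller_ctx {
	struct spdk_thread *thread;		/* thread running the global login poller */
	TAILQ_HEAD(, login_portal) portals;	/* portal cache, touched only on that thread */
};

static struct login_poller_ctx g_login_poller;

static void
_login_poller_add_portal(void *arg)
{
	/* Executes on g_login_poller.thread, so no locking is required. */
	struct login_portal *portal = arg;

	TAILQ_INSERT_TAIL(&g_login_poller.portals, portal, link);
}

void
iscsi_portal_add(struct login_portal *portal)
{
	/* Marshal the update onto the owning thread as an SPDK message. */
	spdk_thread_send_msg(g_login_poller.thread, _login_poller_add_portal, portal);
}
```

Because only the login poller's thread ever touches the cache, no locks are needed; Jim's reply below describes the same approach used for NVMe-oF listeners.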
[16:29:18] hi shuhei - my trello card was focused on poll groups for established connections - not the portals
[16:29:26] but we could apply the same concept to the portals too
[16:30:09] the nvme-of target uses one poller globally and then sends messages to pass the new connections to the thread we want to ultimately process them on
[16:30:23] I think that's a reasonable design for iscsi too
[16:30:25] for the acceptor pollers, we can reduce polling frequency - for example, look for incoming connections every 1ms
[16:30:45] when you add a new portal in nvme-of (called a listener), it just sends a message to every thread in the system
[16:30:55] it's ok if that process is slow
[16:31:12] I think the acceptor poller already only runs once per ms
[16:31:18] (for iSCSI)
[16:31:24] it's configurable in the config file too I think
[16:31:30] or maybe that's for nvme-of
[16:31:34] i was just looking that up
[16:31:45] ACCEPT_TIMEOUT_US is hard-coded in lib/iscsi/acceptor.c
[16:32:01] NVMe-oF is configurable, though
[16:32:35] and the default for the NVMe-oF acceptor poller is once per 10 ms actually
[16:33:00] 10ms response time on establishing new connections is plenty snappy in my opinion
[16:33:16] it's actually less than that - 10ms is the worst case
[16:35:22] Hi all, thank you for the very helpful, quick responses. I'll try to change iSCSI to work like NVMe-oF in the areas not yet covered by Jim and Ziye.
[16:37:55] Hi Jim, I responded to 2) in your trello card.
[16:38:32] but I misunderstood that.
[16:39:24] I should differentiate each poller's role correctly.
[16:39:27] Thanks.
[17:42:42] *** Joins: nvme_noob (d05b0202@gateway/web/freenode/ip.208.91.2.2)
[17:43:52] Hi everyone.. I am trying to see if I can expose a disk that doesn't support 4K as a 4K bdev, or have Malloc create one with a 4K block size..
[17:44:06] couldn't find it in the docs.. appreciate any pointers
[17:45:52] this is for the nvme over rdma target
[17:47:47] I see AIO options with block size.. let me give that a try and come back
[19:18:29] *** Quits: nvme_noob (d05b0202@gateway/web/freenode/ip.208.91.2.2) (Ping timeout: 260 seconds)
[19:49:34] *** Joins: nvme_noob (d05b0202@gateway/web/freenode/ip.208.91.2.2)
[19:58:44] *** Quits: nvme_noob (d05b0202@gateway/web/freenode/ip.208.91.2.2) (Ping timeout: 260 seconds)
[22:18:23] *** Quits: lhodev (~Adium@inet-hqmc07-o.oracle.com) (Remote host closed the connection)
[22:19:59] *** Joins: lhodev (~Adium@66-90-218-190.dyn.grandenetworks.net)
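On the 4K block-size question above (17:43): both the Malloc and AIO bdev modules can expose a 4K logical block size. A hedged example using the legacy INI-style configuration of that era follows; the option names are from memory and may differ in other SPDK versions, so verify against the bdev documentation (or the equivalent RPC calls) before use.

```
# Hedged example only - legacy INI-style SPDK bdev config, option names from memory.
[Malloc]
  NumberOfLuns 1
  LunSizeInMB 128
  BlockSize 4096          # malloc bdev with a 4K logical block size

[AIO]
  # AIO <file name> <bdev name> [<block size>]
  # The optional block size argument lets an AIO bdev expose 4K logical blocks.
  AIO /dev/sdb AIO0 4096
```

The AIO line's optional block size argument is what the 17:47 message refers to.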