[02:54:38] *** Quits: sherlock1122_ (~sherlock1@106.38.14.2) (Remote host closed the connection)
[03:21:04] *** Quits: tomzawadzki (~tomzawadz@134.134.139.76) (Ping timeout: 240 seconds)
[04:05:14] *** Joins: tomzawadzki (~tomzawadz@192.55.54.44)
[05:02:54] *** Quits: tomzawadzki (~tomzawadz@192.55.54.44) (Ping timeout: 252 seconds)
[06:49:15] *** Joins: tomzawadzki (~tomzawadz@192.55.54.44)
[07:07:17] bwalker: about your thread patches. How to discover the lcore<->thread relationship without using spdk_event_call()?
[08:00:47] FYI I can't get into my WebEx account, still trying...
[08:13:24] pwodkowx: Remove all notions of cores from the code
[08:13:35] anywhere you would have used a core, use a thread
[08:13:54] it has mostly the same properties
[08:14:54] you can store a pointer to a thread instead of an lcore id in a lot of places
[08:17:35] bwalker: yes, for bdevs it will be fine, but for vhost e.g. I need to start a poller on a particular core according to the provided CPU mask
[08:18:34] from a thread safety standpoint, you never need to start something on a particular core. Only a particular thread
[08:18:37] how otherwise to choose (obtain) a thread pointer?
[08:18:57] but is the user providing this CPU mask?
[08:19:57] if it's a user configuration thing, we may just have to think a bit about how to rework this
[08:21:38] historically we've let users assign some functionality to run on a specific core, as part of performance tuning
[08:21:59] it's less clear what we should do when the threads are entirely dynamic
[08:22:36] my long term vision, right now, is that the applications will spawn 1 thread per core in the event framework. Those "native" threads will then go to sleep
[08:23:05] then the spdk_threads will get automatically load balanced across those native threads as they spin up and down
[08:24:00] so certainly any code in app/ or in lib/event/ knows about cores
[08:24:04] but the rest of the libraries in lib/ don't
[08:25:31] eeee... looks like a great idea, but what about NUMA?
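[Editor's note: the "store a pointer to a thread instead of an lcore id" advice above can be sketched in miniature. This is illustrative C only, not SPDK code; `sim_thread` and `sim_thread_send_msg` are hypothetical names modeled loosely on SPDK's `spdk_thread`/`spdk_thread_send_msg()`. The point is that work is handed to a specific thread object's mailbox, and the caller never names a core.]

```c
#include <assert.h>
#include <stddef.h>

typedef void (*msg_fn)(void *ctx);

struct sim_msg {
	msg_fn fn;
	void *ctx;
};

/* A thread object owns a small mailbox; callers hold a pointer to it
 * instead of an lcore id. */
struct sim_thread {
	struct sim_msg ring[16];
	unsigned head, tail;
};

/* Queue fn(ctx) to run on "thread" - no core is ever mentioned. */
static int
sim_thread_send_msg(struct sim_thread *thread, msg_fn fn, void *ctx)
{
	unsigned next = (thread->head + 1) % 16;

	if (next == thread->tail) {
		return -1; /* mailbox full */
	}
	thread->ring[thread->head].fn = fn;
	thread->ring[thread->head].ctx = ctx;
	thread->head = next;
	return 0;
}

/* Drain the mailbox; called by whichever native thread currently hosts
 * this sim_thread, so the hosting core can change freely. */
static int
sim_thread_poll(struct sim_thread *thread)
{
	int processed = 0;

	while (thread->tail != thread->head) {
		struct sim_msg *m = &thread->ring[thread->tail];

		m->fn(m->ctx);
		thread->tail = (thread->tail + 1) % 16;
		processed++;
	}
	return processed;
}

/* Example callback for a usage demo: increments an int counter. */
static void
incr(void *ctx)
{
	(*(int *)ctx)++;
}
```

Because the sender only ever touches the target thread's mailbox, thread safety follows from "run it on that thread", which is why no "start on core N" primitive is needed.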
[08:26:01] yeah NUMA is one big open question
[08:26:09] we want to use cores on the same NUMA node when polling a particular VM
[08:26:31] no code has been written to do this yet - still working out details
[08:26:32] can we have some spdk_thread_from_lcore() function?
[08:27:02] the issue with that is that the core that a thread is running on can change
[08:27:32] for instance, there really are cases where you want to collapse all of the spdk_threads onto a single native thread, regardless of NUMA
[08:27:39] if you're mostly idle, say
[08:29:36] so I'm open to all suggestions at this point - purely in the design phase
[08:35:06] I would vote for providing a "pinning" API: if the user explicitly specifies an "allowed-core-mask", we should obey it
[08:35:36] I think that's reasonable - like the kernel set affinity stuff
[08:35:40] so if an application wants to mask off certain cores, they can do that
[08:35:50] yes
[08:36:07] can probably do both force to a specific core and force to a whole NUMA node
[08:49:15] maybe a silly question: why didn't you reuse the pthread API in your patch series?
[08:58:22] *** Quits: guerby (~guerby@april/board/guerby) (Remote host closed the connection)
[09:00:55] *** Joins: guerby (~guerby@april/board/guerby)
[09:10:28] question: when we do a save_config RPC, is there any way to specify an order for the methods? Or on load, I guess. I have a basic case now where I'm saving a config with a bdev and a vbdev, and when loading, the vbdev construct method is listed first, so it fails because the bdev isn't there yet
[09:11:31] *** Quits: gila (~gila@5ED74129.cm-7-8b.dynamic.ziggo.nl) (Quit: Textual IRC Client: www.textualapp.com)
[09:35:18] pwodkowx: I'm not sure what you mean - like reimplement pthread_create and such?
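[Editor's note: the "allowed-core-mask" pinning idea from the 08:35 exchange could look roughly like this. It is a hypothetical sketch, not SPDK API; `parse_core_mask`/`apply_core_mask` are illustrative names. It uses the same Linux affinity mechanism ("the kernel set affinity stuff") via `cpu_set_t` and `sched_setaffinity()`.]

```c
#define _GNU_SOURCE /* for CPU_SET()/sched_setaffinity() in glibc */
#include <sched.h>
#include <stdlib.h>
#include <errno.h>

/* Parse a hex core mask (e.g. "0x5" = cores 0 and 2) into a cpu_set_t. */
static int
parse_core_mask(const char *mask, cpu_set_t *set)
{
	char *end;
	unsigned long long bits = strtoull(mask, &end, 0);
	int cpu;

	if (end == mask || *end != '\0') {
		return -EINVAL; /* not a valid number */
	}
	CPU_ZERO(set);
	for (cpu = 0; cpu < 64; cpu++) {
		if (bits & (1ULL << cpu)) {
			CPU_SET(cpu, set);
		}
	}
	return 0;
}

/* Pin the calling thread to the allowed cores.  Only called when the
 * user explicitly configured a mask; otherwise threads stay free to
 * migrate (or collapse onto one core when idle). */
static int
apply_core_mask(const char *mask)
{
	cpu_set_t set;
	int rc = parse_core_mask(mask, &set);

	if (rc != 0) {
		return rc;
	}
	return sched_setaffinity(0, sizeof(set), &set);
}
```

Because the mask constrains a *set* of cores rather than naming one, the same call shape covers both cases mentioned at 08:36: a single-core mask forces a specific core, and a mask covering one socket's cores forces a NUMA node.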
[10:01:57] *** Joins: travis-ci (~travis-ci@ec2-54-92-212-55.compute-1.amazonaws.com)
[10:01:57] (spdk/master) thread: Add debug logging (Ben Walker)
[10:01:57] Diff URL: https://github.com/spdk/spdk/compare/828008f184d0...ec571793d466
[10:01:57] *** Parts: travis-ci (~travis-ci@ec2-54-92-212-55.compute-1.amazonaws.com) ()
[10:17:06] *** Quits: tomzawadzki (~tomzawadz@192.55.54.44) (Ping timeout: 246 seconds)
[10:30:55] peluse: vbdev modules typically just create an internal context on a construct RPC call - then during examine it looks at that list to see if it should act on the new bdev
[10:36:18] pwodkowx: can you take a look at https://review.gerrithub.io/#/c/spdk/spdk/+/424584/?
[11:17:43] jimharris, can you point me to a vbdev module that does that? I'll fixup the PT module to do it as well once I get the crypto fixed up
[11:18:10] lib/bdev/split
[11:19:21] vbdev_split_init() parses the config file, but vbdev_split_add_config() just puts an entry in the g_split_config TAILQ (it doesn't look to see if the base bdev exists)
[11:20:12] when vbdev_split_examine() calls vbdev_split_config_find_by_base_name() to see if the newly registered bdev should be split or not
[11:20:15] when => then
[11:49:58] *** Joins: gila (~gila@5ED74129.cm-7-8b.dynamic.ziggo.nl)
[11:53:06] *** Joins: travis-ci (~travis-ci@ec2-54-92-212-55.compute-1.amazonaws.com)
[11:53:07] (spdk/master) autopackage: add ipsec submodule to autopackage (Paul Luse)
[11:53:07] Diff URL: https://github.com/spdk/spdk/compare/0fd41a7c34b9...3e26af2a0bbf
[11:53:07] *** Parts: travis-ci (~travis-ci@ec2-54-92-212-55.compute-1.amazonaws.com) ()
[15:24:08] *** Joins: travis-ci (~travis-ci@ec2-54-198-138-119.compute-1.amazonaws.com)
[15:24:09] (spdk/master) rpc: g_rpc_lock_path: remove redundant plus (wuzhouhui)
[15:24:10] Diff URL: https://github.com/spdk/spdk/compare/74ebeda46160...311e0005e50f
[15:24:10] *** Parts: travis-ci (~travis-ci@ec2-54-198-138-119.compute-1.amazonaws.com) ()
[16:37:35] *** Joins: travis-ci (~travis-ci@ec2-54-198-138-119.compute-1.amazonaws.com)
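[Editor's note: the construct-then-examine pattern jimharris describes, modeled loosely on lib/bdev/split, boils down to the sketch below. Illustrative code only; `vbdev_add_config`/`vbdev_examine` are hypothetical names standing in for vbdev_split_add_config()/vbdev_split_examine(). The construct RPC merely records intent in a list; the examine callback matches newly registered bdevs against that list, so load-time RPC ordering stops mattering.]

```c
#include <string.h>
#include <stdlib.h>
#include <sys/queue.h>

struct split_config {
	char base_name[64];
	TAILQ_ENTRY(split_config) link;
};

/* Pending configuration, analogous to the g_split_config TAILQ. */
static TAILQ_HEAD(, split_config) g_config =
	TAILQ_HEAD_INITIALIZER(g_config);

/* Construct RPC: just remember the request; do NOT require the base
 * bdev to exist yet. */
static int
vbdev_add_config(const char *base_name)
{
	struct split_config *cfg = calloc(1, sizeof(*cfg));

	if (cfg == NULL) {
		return -1;
	}
	strncpy(cfg->base_name, base_name, sizeof(cfg->base_name) - 1);
	TAILQ_INSERT_TAIL(&g_config, cfg, link);
	return 0;
}

/* Examine: called whenever any bdev registers; act only if it matches
 * a saved config entry.  Returns 1 if this module would claim it. */
static int
vbdev_examine(const char *bdev_name)
{
	struct split_config *cfg;

	TAILQ_FOREACH(cfg, &g_config, link) {
		if (strcmp(cfg->base_name, bdev_name) == 0) {
			return 1; /* create the vbdev on top of it here */
		}
	}
	return 0;
}
```

With this shape, the save_config ordering problem peluse hit at 09:10 goes away: a vbdev construct RPC that arrives before its base bdev simply parks an entry in the list until examine fires.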
[16:37:35] (spdk/master) bdev/lvol: fix error path of spdk_rpc_get_lvol_stores() (wuzhouhui)
[16:37:35] Diff URL: https://github.com/spdk/spdk/compare/cee0fef13832...106218144363
[16:37:35] *** Parts: travis-ci (~travis-ci@ec2-54-198-138-119.compute-1.amazonaws.com) ()
[19:40:20] *** Joins: sherlock1122_ (~sherlock1@61.148.245.159)
[19:45:23] *** Quits: drv (daniel@oak.drv.nu) (Ping timeout: 260 seconds)
[19:46:14] *** Joins: drv (daniel@oak.drv.nu)
[19:46:14] *** ChanServ sets mode: +o drv
[19:50:01] *** Quits: sherlock1122_ (~sherlock1@61.148.245.159) (Ping timeout: 260 seconds)
[20:35:27] *** Quits: guerby (~guerby@april/board/guerby) (Remote host closed the connection)
[20:38:14] *** Joins: guerby (~guerby@april/board/guerby)
[20:42:29] *** Quits: guerby (~guerby@april/board/guerby) (Read error: Connection reset by peer)
[20:45:18] *** Joins: guerby (~guerby@april/board/guerby)
[20:47:26] *** Quits: guerby (~guerby@april/board/guerby) (Excess Flood)
[20:50:19] *** Joins: guerby (~guerby@april/board/guerby)
[20:53:00] *** Quits: guerby (~guerby@april/board/guerby) (Excess Flood)
[20:56:22] *** Joins: guerby (~guerby@april/board/guerby)
[20:58:24] *** Quits: guerby (~guerby@april/board/guerby) (Excess Flood)
[21:00:55] *** Joins: guerby (~guerby@april/board/guerby)
[21:41:47] *** Quits: guerby (~guerby@april/board/guerby) (Ping timeout: 252 seconds)
[21:47:20] *** Joins: guerby (~guerby@april/board/guerby)
[21:49:51] *** Quits: guerby (~guerby@april/board/guerby) (Excess Flood)
[21:52:22] *** Joins: guerby (~guerby@april/board/guerby)
[22:24:39] *** Quits: guerby (~guerby@april/board/guerby) (Ping timeout: 252 seconds)
[22:27:10] *** Joins: guerby (~guerby@april/board/guerby)
[22:34:44] *** Quits: guerby (~guerby@april/board/guerby) (Excess Flood)
[22:37:16] *** Joins: guerby (~guerby@april/board/guerby)