[00:08:51] *** Joins: gila (~gila@5ED4D979.cm-7-5d.dynamic.ziggo.nl)
[00:09:13] *** Joins: travis-ci (~travis-ci@ec2-54-226-118-26.compute-1.amazonaws.com)
[00:09:14] (spdk/master) iscsi: Fix conflict by destructing connection and logout timeout (Shuhei Matsumoto)
[00:09:14] Diff URL: https://github.com/spdk/spdk/compare/1875912ff071...47d5ddb7ce02
[00:09:14] *** Parts: travis-ci (~travis-ci@ec2-54-226-118-26.compute-1.amazonaws.com) ()
[00:47:15] *** Quits: zhouhui (~wzh@114.255.44.139) (Ping timeout: 268 seconds)
[00:48:03] *** Joins: zhouhui (~wzh@114.255.44.139)
[01:58:06] *** Joins: tomzawadzki (uid327004@gateway/web/irccloud.com/x-esswuhsmmctvsiwa)
[03:12:04] *** Quits: gila (~gila@5ED4D979.cm-7-5d.dynamic.ziggo.nl) (Quit: Textual IRC Client: www.textualapp.com)
[04:09:42] *** Quits: tomzawadzki (uid327004@gateway/web/irccloud.com/x-esswuhsmmctvsiwa) (Quit: Connection closed for inactivity)
[04:28:34] *** Joins: travis-ci (~travis-ci@ec2-54-242-147-107.compute-1.amazonaws.com)
[04:28:35] (spdk/master) thread: Rename spdk_free_thread to spdk_thread_exit (Ben Walker)
[04:28:35] Diff URL: https://github.com/spdk/spdk/compare/47d5ddb7ce02...9cba82b95551
[04:28:35] *** Parts: travis-ci (~travis-ci@ec2-54-242-147-107.compute-1.amazonaws.com) ()
[05:25:29] Hi, a simple and naive question. Sometimes people say "nvmf host", but sometimes "nvmf initiator". Are "nvmf host" and "nvmf initiator" the same concept?
[07:23:32] zhouhui, yup, host and initiator are both used. It's the system consuming the storage; the target is the one providing the storage.
[08:58:43] I had a patch fail in CI with the following... seems like maybe the test system needs a reboot?
[08:58:45] "## ERROR: requested 4096 hugepages but only 3556 could be allocated.
[08:58:45] ## Memory might be heavily fragmented. Please try flushing the system cache, or reboot the machine.
[08:58:46] 10:53:01 # trap - ERR
[08:58:47] 10:53:01 # print_backtrace
[08:58:48] 10:53:01 # [[ ehxBE =~ e ]]
[08:58:49] 10:53:01 # local shell_options=ehxBE
[08:58:51] 10:53:01 # set +x
[08:58:53] ========== Backtrace start: =========="
[09:16:08] *** Joins: sethhowe (~sethhowe@134.134.139.72)
[09:35:31] FYI 2nd run worked fine
[10:00:13] peluse: yeah - i've seen that happen before, especially if the test system/VM has run a bunch of tests in a row without failure
[10:00:33] cool, it auto-rebooted too so that's good
[10:17:19] *** Joins: travis-ci (~travis-ci@ec2-54-90-129-245.compute-1.amazonaws.com)
[10:17:20] (spdk/master) nvme_perf: Relocate functions only for NVMe to introduce abstraction (Shuhei Matsumoto)
[10:17:20] Diff URL: https://github.com/spdk/spdk/compare/9cba82b95551...ff3c2e3c846e
[10:17:20] *** Parts: travis-ci (~travis-ci@ec2-54-90-129-245.compute-1.amazonaws.com) ()
[10:39:03] *** Joins: KipIngram (~kipingram@185.149.90.58)
[10:40:24] Morning gents. I'm running fio with the plugin against spdk. When I assess latency, I presume fio is starting a timer when it hands each operation off to spdk, and stopping it when it receives a completion notification or result back from spdk. Does spdk tell me anything about the latency associated with its own operation?
[10:40:48] If I could gain visibility into that, I could get closer to a measurement of just the actual target.
[11:07:37] @KipIngram, are you using the fio_plugin for the bdev layer, or for NVMe?
[11:16:32] @KipIngram, if you are going against the fio_plugin that submits I/O to the generic bdev layer (examples/bdev/fio_plugin.c), you may find the get_bdevs_iostat rpc to be useful. It dumps some relevant information about cumulative latency and number of operations.
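A minimal sketch of the suggestion above, assuming the SPDK application that owns the bdevs is listening on the default JSON-RPC socket (/var/tmp/spdk.sock) and that the commands are run from the top of an SPDK tree. The field names mentioned in the comments are from memory of contemporaneous output, so check the JSON your build actually returns:

    # Dump cumulative per-bdev counters (operation counts plus latency,
    # reported in CPU ticks in the builds I've seen; divide by the
    # tick_rate field in the same output to convert to time):
    ./scripts/rpc.py get_bdevs_iostat

    # Sample twice during a steady-state fio run and diff the counters
    # to isolate just that interval:
    ./scripts/rpc.py get_bdevs_iostat > iostat_before.json
    sleep 10
    ./scripts/rpc.py get_bdevs_iostat > iostat_after.json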
[11:35:33] I use this one:
[11:35:35] LD_PRELOAD=/home/kingram/tools/spdk/examples/nvme/fio_plugin/fio_plugin
[11:35:59] But I could change that if there's a better alternative.
[12:42:34] *** Joins: travis-ci (~travis-ci@ec2-54-196-183-206.compute-1.amazonaws.com)
[12:42:35] (spdk/master) bdev/qos: add the function pointers for qos operations (GangCao)
[12:42:35] Diff URL: https://github.com/spdk/spdk/compare/ff3c2e3c846e...cd4dd43ab8a2
[12:42:35] *** Parts: travis-ci (~travis-ci@ec2-54-196-183-206.compute-1.amazonaws.com) ()
[12:48:40] *** Joins: travis-ci (~travis-ci@ec2-54-196-183-206.compute-1.amazonaws.com)
[12:48:41] (spdk/master) reduce: mark correct number of backing pages for md (Jim Harris)
[12:48:41] Diff URL: https://github.com/spdk/spdk/compare/e28605f47ab0...1fa0283f31a3
[12:48:41] *** Parts: travis-ci (~travis-ci@ec2-54-196-183-206.compute-1.amazonaws.com) ()
[13:37:06] *** Quits: darsto (~darsto@89-78-174-111.dynamic.chello.pl) (Ping timeout: 246 seconds)
[13:37:24] *** Joins: darsto (~darsto@89-78-174-111.dynamic.chello.pl)
[14:03:14] *** Joins: travis-ci (~travis-ci@ec2-3-84-4-189.compute-1.amazonaws.com)
[14:03:15] (spdk/master) ftl: Restore state from the SSD (Wojciech Malikowski)
[14:03:15] Diff URL: https://github.com/spdk/spdk/compare/1fa0283f31a3...5c8f369a4100
[14:03:15] *** Parts: travis-ci (~travis-ci@ec2-3-84-4-189.compute-1.amazonaws.com) ()
[14:06:37] *** Quits: lhodev (~lhodev@66-90-218-190.dyn.grandenetworks.net) (Ping timeout: 246 seconds)
[14:09:23] *** Joins: travis-ci (~travis-ci@ec2-54-80-181-18.compute-1.amazonaws.com)
[14:09:23] (spdk/master) nvme: report SQ deletion code to outstanding admin requests (Changpeng Liu)
[14:09:23] Diff URL: https://github.com/spdk/spdk/compare/5c8f369a4100...d9e865a8852b
[14:09:23] *** Parts: travis-ci (~travis-ci@ec2-54-80-181-18.compute-1.amazonaws.com) ()
[14:18:52] KipIngram: you will get slightly different results from the two as one is going through our bdev layer and the other is going directly to the NVMe driver, but the bdev layer does provide the extra timing information that you are looking for.
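A minimal sketch of switching to the bdev fio_plugin referred to above, so that the bdev-layer statistics become available. The install path reuses the one quoted earlier in the conversation; the config file contents, PCI address, bdev name, and fio job options are assumptions to adapt, not a known-good recipe:

    # bdev.conf (hypothetical legacy INI-style config) attaching one
    # local NVMe controller so it is exposed as bdev "Nvme0n1":
    #   [Nvme]
    #     TransportID "trtype:PCIe traddr:0000:06:00.0" Nvme0

    LD_PRELOAD=/home/kingram/tools/spdk/examples/bdev/fio_plugin/fio_plugin \
      fio --name=bdev-latency --ioengine=spdk_bdev --spdk_conf=./bdev.conf \
          --filename=Nvme0n1 --thread=1 --direct=1 --rw=randread --bs=4k \
          --iodepth=32 --time_based=1 --runtime=30

Comparing fio's own completion latency against the submission-to-completion time accumulated by the bdev layer should then give a rough split between plugin/framework overhead and time spent in the driver and device.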
[14:35:25] *** Joins: lhodev (~lhodev@inet-hqmc06-o.oracle.com)
[14:37:27] *** Quits: lhodev (~lhodev@inet-hqmc06-o.oracle.com) (Remote host closed the connection)
[14:37:59] *** Joins: lhodev (~lhodev@inet-hqmc06-o.oracle.com)
[14:40:01] *** Quits: lhodev (~lhodev@inet-hqmc06-o.oracle.com) (Remote host closed the connection)
[14:40:32] *** Joins: lhodev (~lhodev@inet-hqmc06-o.oracle.com)
[14:42:34] *** Quits: lhodev (~lhodev@inet-hqmc06-o.oracle.com) (Remote host closed the connection)
[14:43:05] *** Joins: lhodev (~lhodev@inet-hqmc06-o.oracle.com)
[14:45:06] *** Quits: lhodev (~lhodev@inet-hqmc06-o.oracle.com) (Remote host closed the connection)
[14:45:38] *** Joins: lhodev (~lhodev@inet-hqmc06-o.oracle.com)
[14:47:39] *** Quits: lhodev (~lhodev@inet-hqmc06-o.oracle.com) (Remote host closed the connection)
[14:51:32] *** Joins: lhodev (~lhodev@inet-hqmc06-o.oracle.com)
[14:53:33] *** Quits: lhodev (~lhodev@inet-hqmc06-o.oracle.com) (Remote host closed the connection)
[14:54:04] *** Joins: lhodev (~lhodev@inet-hqmc06-o.oracle.com)
[15:17:45] jimharris, FYI the ISAL patch is failing the RBD test because of ISAL being linked... not sure why yet, but I finally can repro and make it go away simply by building --without-isal. More later...
[15:18:35] oh no
[15:18:57] maybe ping tushar to help debug?
[15:20:37] ya, have a call at 3:30 and will spend a little time on it myself, then will reach out for sure. Wondering if it's a naming conflict somewhere; didn't ziye mention that recently?
[15:33:41] i don't remember ziye mentioning that but please don't rely on my memory :)
[15:42:44] *** Quits: lhodev (~lhodev@inet-hqmc06-o.oracle.com) (Quit: My MacBook has gone to sleep. ZZZzzz…)
[15:44:22] *** Joins: lhodev (~lhodev@66-90-218-190.dyn.grandenetworks.net)
[17:22:18] jimharris, heh. So I did hunt down the version of isa-l in the distro install I'm using and it's pretty far off from what we're using. Ceph does use isa-l, and we're installing to system dirs, so that seems like it could be a problem; talked to greg about it too.
[17:22:47] I've tried matching the versions and that didn't seem to help, but a better test might be to install our ISA-L somewhere else, I dunno. Will try that next.
[17:23:32] I did find the rados call that hangs... rados_connect(). Looking briefly at the code I didn't see any smoking gun, just a random number generator and a few other function calls that I didn't hunt down yet.
[18:40:02] *** Joins: travis-ci (~travis-ci@ec2-54-145-218-250.compute-1.amazonaws.com)
[18:40:03] (spdk/master) nvmf/tcp: dump the req state of the tqpair (Ziye Yang)
[18:40:03] Diff URL: https://github.com/spdk/spdk/compare/d9e865a8852b...b62a1f9ef168
[18:40:03] *** Parts: travis-ci (~travis-ci@ec2-54-145-218-250.compute-1.amazonaws.com) ()
[23:25:36] *** Joins: ziyeyang_ (~ziyeyang@192.55.46.46)
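Circling back to the ISA-L/RBD linking question above, a rough sketch of the kind of checks being discussed: confirming whether the failing test binary also resolves a distro copy of libisal (pulled in through Ceph/librados) and ruling ISA-L out entirely via the configure switch already mentioned. The binary path is only an example, not the actual test that failed:

    # Does the test binary pull in a system libisal dynamically (e.g.
    # via librados), alongside the symbols SPDK links from its bundled
    # isa-l? (binary path is an example)
    ldd ./test/bdev/bdevio/bdevio | grep -i isal
    ldconfig -p | grep -i isal

    # Take ISA-L out of the picture entirely, as noted above:
    ./configure --without-isal
    make clean && make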