[00:13:09] (03PS1) 10Awight: [DNM] Update to assets with an LFS file [services/ores/deploy] - 10https://gerrit.wikimedia.org/r/419642 (https://phabricator.wikimedia.org/T180627)
[01:05:47] (03PS2) 10Ladsgroup: Build a system that allows deleting old scores when new ones have arrived [extensions/ORES] - 10https://gerrit.wikimedia.org/r/418877 (https://phabricator.wikimedia.org/T166427)
[01:21:02] (03CR) 10jerkins-bot: [V: 04-1] Build a system that allows deleting old scores when new ones have arrived [extensions/ORES] - 10https://gerrit.wikimedia.org/r/418877 (https://phabricator.wikimedia.org/T166427) (owner: 10Ladsgroup)
[01:35:22] (03CR) 10jerkins-bot: [V: 04-1] Build a system that allows deleting old scores when new ones have arrived [extensions/ORES] - 10https://gerrit.wikimedia.org/r/418877 (https://phabricator.wikimedia.org/T166427) (owner: 10Ladsgroup)
[07:39:51] 10Scoring-platform-team (Current), 10Packaging, 10Patch-For-Review: Package word2vec binaries - https://phabricator.wikimedia.org/T188446#4052453 (10akosiaris) >>! In T188446#4050548, @awight wrote: > I see we have git-lfs on ores*.eqiad.wmnet, so we're almost ready to give this a try. > > What I'm missing...
[09:31:44] (03CR) 10Ladsgroup: "recheck" [extensions/ORES] - 10https://gerrit.wikimedia.org/r/418877 (https://phabricator.wikimedia.org/T166427) (owner: 10Ladsgroup)
[09:50:21] 10Scoring-platform-team (Current), 10articlequality-modeling, 10User-Ladsgroup, 10artificial-intelligence: Article quality campaign for Persian Wikipedia - https://phabricator.wikimedia.org/T174684#4052611 (10Ladsgroup) Announced it in Persian Wikipedia. Hopefully we will have it done soon.
[13:36:09] o/
[14:02:57] \o/ Amir1. I'll be checking the fawiki campaign with great interest :)
[14:03:14] 59 done :)
[14:03:23] Only 289 to go :D
[14:04:45] halfak: Just announced it and we have some passionate users :D
[14:05:12] That's great! I'm looking forward to modeling some other wikis off of what we figured out for fawiki
[14:05:15] halfak: btw. the cryptocurrency thing was brought up in the SoS, fawiki is (in)famous now
[14:05:24] * halfak is not aware of this
[14:05:33] fawiki on the blockchain?
[14:05:43] https://lists.wikimedia.org/pipermail/wikitech-l/2018-March/089636.html
[14:05:50] I was talking about this yesterday :D
[14:07:51] Oh! Someone put a miner in JS on the wiki?
[14:08:11] I must have been spacing out. Sorry :(
[14:08:56] yup
[14:09:27] It actually occupied most of my day yesterday (in a volunteer capacity)
[14:09:52] I reported it, removed rights, oversighted things
[14:09:53] etc.
[14:13:38] Good job Amir1 :)
[14:13:57] How did you find it?
[14:15:01] Some admin was checking the wiki and realized it was using too many resources. Checked the recent changes and found it
[14:16:52] Did ORES flag it by any chance?
[14:16:58] I know it's not in ns zero
[14:17:01] Just wondering.
[14:17:19] nope, the user was old and had the right to change mediawiki:common.js
[14:17:36] I wonder if we could make a model that looks at edits to the MediaWiki namespace and flags questionable edits.
[14:17:36] he wasn't an admin though, "template editor"
[14:17:46] I have to reboot the oresrdb hosts
[14:17:48] Given that JS is generally not language specific.
[14:17:58] akosiaris, right now? I'm available
[14:18:06] not necessarily
[14:18:13] but we can do it now if it suits you
[14:18:32] problem is, this is going to cause downtime, because of celery and twemproxy and MULTI and all that
[14:18:34] :-(
[14:18:51] Understood.
Will you write up the incident? :P
[14:19:43] sure. "I rebooted the redis host, celery could not submit new jobs, scores errored. Actionables: Find a way to make redis HA"
[14:19:52] there.. ready beforehand
[14:19:59] can't blame us for not being proactive :D
[14:20:34] +1
[14:20:59] ok I'll do so in 5
[14:21:03] I'll start with codfw first
[14:21:06] OK I've got grafana up and I'm ready to think through any surprises.
[14:21:43] 10Scoring-platform-team, 10Wikilabels, 10editquality-modeling, 10artificial-intelligence: Complete Latvian Wikipedia editquality campaign - https://phabricator.wikimedia.org/T163005#4053187 (10Papuass) We have completed the work. There are 5 labels remaining, but these return "revision not found" or "Missi...
[14:22:10] 10Scoring-platform-team (Current), 10ORES, 10Operations: Reboot oresrdb - https://phabricator.wikimedia.org/T189781#4053189 (10Halfak)
[14:22:16] if anything, we will have a solid idea of how fast ores recovers from a redis outage
[14:22:54] 10Scoring-platform-team, 10Wikilabels, 10editquality-modeling, 10artificial-intelligence: Complete Latvian Wikipedia editquality campaign - https://phabricator.wikimedia.org/T163005#4053203 (10Halfak) Wow! nice work. Nevermind those. Regretfully, we don't have any good options for deleted content. We'l...
[14:23:30] 10Scoring-platform-team (Current), 10Wikilabels, 10editquality-modeling, 10artificial-intelligence: Complete Latvian Wikipedia editquality campaign - https://phabricator.wikimedia.org/T163005#4053204 (10Halfak)
[14:24:02] 10Scoring-platform-team (Current), 10editquality-modeling, 10artificial-intelligence: Train/test damaging & goodfaith models for Latvian Wikipedia - https://phabricator.wikimedia.org/T163006#4053209 (10Halfak)
[14:25:04] * halfak F5 bombs ORES
[14:25:27] still doing codfw btw
[14:26:42] kk
[14:27:11] oresrdb2001 has slightly higher CPU. I imagine that's coming back online?
[14:27:41] maybe unrelated...
[14:27:44] Only 7%
[14:28:18] probably
[14:30:14] There we go. Big spike on oresrdb1002
[14:30:46] yup, syncing from oresrdb1001
[14:32:01] FWIW, no failed requests to ORES so far
[14:32:10] yeah it's the slaves I did up to now
[14:32:22] Gotcha
[14:33:39] it's taking its sweet time to catch up
[14:34:08] ok both slaves caught up to the masters... and now let's do oresrdb2002
[14:34:09] er
[14:34:14] oresrdb2001*
[14:34:40] First error
[14:34:45] Error 111 connecting to oresrdb.svc.codfw.wmnet:6379. Connection refused.
[14:34:58] and there we are
[14:35:05] ConnectionError(self._error_message(e))\nredis.exceptions.ConnectionError: Error 111 connecting to oresrdb.svc.codfw.wmnet:6379. Connection refused.\
[14:35:17] Redis is loading the dataset in memory
[14:35:30] eqiad btw is going to be worse than codfw
[14:35:40] "redis.exceptions.BusyLoadingError: Redis is loading the dataset in memory"
[14:35:45] both because of the traffic and the nature of the oresrdb hosts
[14:37:09] loading_loaded_perc:84.50
[14:37:31] halfak: just to follow up on what I did yesterday: I applied what you asked for, using the revision parent in recent changes
[14:37:31] should be up and running now
[14:37:42] Back online
[14:37:47] Confirmed
[14:38:11] let's see how bad this was
[14:38:15] Amir1, oh great. how did that work?
[14:38:23] just fine
[14:38:32] the patch is up for review :D
[14:38:44] :) link handy?
[14:38:58] akosiaris, are we clear?
[14:39:07] redis wise @codfw ? yes
[14:39:19] OK. Cool. Still need to do eqiad?
[14:39:25] it's eqiad I should do, but still gauging the effect in codfw
[14:39:32] OK gotcha
[14:39:40] graphs show it's recovering already
[14:40:13] PROBLEM - https://grafana.wikimedia.org/dashboard/db/ores grafana alert on einsteinium is CRITICAL: CRITICAL: https://grafana.wikimedia.org/dashboard/db/ores is alerting: 5xx rate (Change prop) alert.
[14:40:17] lol :P
[14:40:17] roughly 6k queries ?
[14:40:31] that's what I calculate we were unable to serve
[14:40:50] 4 mins of downtime * ~1500 scores/minute ?
[14:41:43] Note that much of that is precaching. Not real usage
[14:42:09] Externally, we missed out on 60 scores/minute
[14:42:26] MediaWiki got a bit behind though. Hard to measure usage of ORES highlighting there
[14:42:30] It counts as precaching
[14:42:49] indeed
[14:43:00] anyway everything seems fine, I am doing eqiad now
[14:43:04] this is going to last longer btw
[14:43:30] btw we could set up twemproxy, at least for the cache
[14:43:41] I guess that would help, right ?
[14:44:16] akosiaris, yes. It should be able to serve 80% of requests even if celery is jammed.
[14:44:33] I'm guessing I can't f5 bomb eqiad since I get routed to codfw.
[14:44:55] and here we go. rebooting oresrdb1001
[14:45:50] Hmm. Seems like we're already down?
[14:46:01] * halfak tries to ping from stat1005.eqiad.wmnet
[14:47:01] Confirmed. getting 500 error
[14:47:03] PROBLEM - ORES worker production on ores.wikimedia.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 INTERNAL SERVER ERROR - 2971 bytes in 0.669 second response time
[14:47:14] icinga at least alerted. that's good
[14:47:42] loading_loaded_perc:16.95
[14:47:48] box is up and loading the cache
[14:47:59] the queue should be already accepting stuff
[14:48:09] Still 500
[14:48:29] loading_loaded_perc:43.71
[14:48:29] loading_eta_seconds:1
[14:48:30] lol
[14:48:37] that's clearly wrong... no way the ETA is 1 sec
[14:49:17] FYI that's output from the info persistence command
[14:49:17] Still 500
[14:49:26] loading_loaded_perc:82.98
[14:49:30] getting close now
[14:49:55] up
[14:50:03] confirmed
[14:50:03] RECOVERY - ORES worker production on ores.wikimedia.org is OK: HTTP OK: HTTP/1.1 200 OK - 825 bytes in 1.026 second response time
[14:50:04] Confirmed
[14:50:07] woo
[14:50:21] * halfak pats icinga-wm
[14:50:34] so some 3 minutes of downtime that icinga noted, plus a few before it actually alerted
[14:51:18] I'm going to stop monitoring now.
[14:51:22] Safe akosiaris ?
[14:51:27] yup
[14:51:29] cool
[14:51:31] Thanks
[14:51:34] thanks as well!
[14:53:18] yeah, some 5 mins of total downtime, ~2500 requests not served, and ~250 requests from external sources
[14:53:48] rather ok-ish. If we manage to do the split and use twemproxy that should become even easier
[14:57:43] +1. We ran into some issue with that in celery during the last test.
[14:57:52] We've been meaning to do a celery upgrade for a while.
[14:58:03] So maybe we can get both problems solved at once.
[14:58:20] I'm going to step away for a bit. Should be back in 30 mins
[15:16:14] RECOVERY - https://grafana.wikimedia.org/dashboard/db/ores grafana alert on einsteinium is OK: OK: https://grafana.wikimedia.org/dashboard/db/ores is not alerting.
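(Editor's note: the `loading_loaded_perc` / `loading_eta_seconds` lines above are fields from Redis's `INFO persistence` output, which akosiaris was polling while each oresrdb host reloaded its dump from disk. Below is a minimal sketch of that kind of polling, assuming the redis-py client; the host, port, and poll interval are illustrative, not the production oresrdb configuration.)

```python
"""Poll a freshly rebooted Redis host until it finishes loading its dump.
A sketch only, assuming redis-py; host/port values are placeholders."""
import time

import redis
from redis.exceptions import BusyLoadingError, ConnectionError


def wait_until_loaded(host="oresrdb.svc.codfw.wmnet", port=6379, poll=5):
    client = redis.StrictRedis(host=host, port=port, socket_timeout=2)
    while True:
        try:
            info = client.info("persistence")
        except BusyLoadingError:
            # Same error ORES hit: the server answers, but the dataset
            # is still being read into memory.
            print("Redis is loading the dataset in memory...")
        except ConnectionError as e:
            # Box may still be rebooting (Error 111, connection refused).
            print("Not reachable yet: %s" % e)
        else:
            if info.get("loading", 0) == 0:
                print("Load complete.")
                return
            # These are the fields quoted in the log above.
            print("loaded %.2f%%, eta %ss" % (
                info.get("loading_loaded_perc", 0.0),
                info.get("loading_eta_seconds", "?")))
        time.sleep(poll)
```

Catching BusyLoadingError separately matters here: Redis starts accepting TCP connections before the dataset is fully loaded, and that window is exactly when ORES was returning 500s.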
[15:41:21] (03PS1) 10Awight: [DNM] Configure scap to do git-lfs [services/ores/deploy] - 10https://gerrit.wikimedia.org/r/419759 (https://phabricator.wikimedia.org/T180627)
[15:45:41] 10Scoring-platform-team (Current), 10Packaging, 10Patch-For-Review: Package word2vec binaries - https://phabricator.wikimedia.org/T188446#4053386 (10awight) >>! In T188446#4052453, @akosiaris wrote: > Nice. Per T180628#4045978 let's do some testing first in labs/beta before we start testing directly in produ...
[15:48:00] o/ awight
[15:48:07] hey
[15:48:14] Thanks for picking up the git-lfs stuff this AM :)
[15:48:52] yeah I made some progress last night, too. Nothing overly positive yet, but I'll mention any change as it goes along.
[15:48:59] We're closer :)
[15:49:20] btw, awight, it looks like you fixed the template but didn't re-generate the Makefile in https://github.com/wiki-ai/editquality/pull/141
[15:49:46] I'm ready to merge and then work on an identical PR for arwiki :)
[15:50:16] "Load diff", the makefile change is too big to show
[15:50:23] Right. I did
[15:50:49] Which lines look wrong?
[15:51:01] See lines 505 and 506 in the diff
[15:51:07] Of Makefile
[15:51:34] aha, ok I need to tweak the template some more.
[15:53:53] It's my turn to bring my broken bike to the shop today. I'll be AFK in 10 minutes to do that.
[15:54:08] Anything you need before I run away, awight?
[15:54:22] o/ good luck
[16:19:03] OK actually leaving now. Back in ~1 hour
[16:21:22] 10Scoring-platform-team, 10ORES: Reimage deployment-ores01 as Stretch - https://phabricator.wikimedia.org/T189790#4053513 (10awight) p:05Triage>03Normal
[16:30:09] Amir1: Do you have a minute to reimage deployment-ores01?
[16:30:51] awight: yup
[16:31:17] Amir1: Cool, thanks! I need it reimaged as Stretch, so I can install git-lfs.
[16:31:59] sure
[16:33:19] ty, I can't do the 2FA for wikitech today
[16:43:28] awight: It should be up, can you take a look
[16:45:00] Amir1: It's up, but probably still running puppet. I'll watch.
[16:49:55] cool, let me know if I can help :)
[16:56:56] aww nuts, the puppet roles have changed
[17:00:33] the role::ores::{worker,web} roles need to be replaced by "role::ores"
[17:01:15] Strange, horizon changes aren't being reflected in the openstack tool
[17:06:42] got that cleaned up, rebooting to get a puppet run
[17:07:57] 10Scoring-platform-team (Current), 10drafttopic-modeling: Checklist for drafttopic repo - https://phabricator.wikimedia.org/T189797#4053788 (10Sumit)
[17:08:14] 10Scoring-platform-team (Current), 10drafttopic-modeling: Checklist for drafttopic repo - https://phabricator.wikimedia.org/T189797#4053800 (10Sumit)
[17:15:51] 10Scoring-platform-team (Current), 10drafttopic-modeling: Checklist for drafttopic repo - https://phabricator.wikimedia.org/T189797#4053829 (10Sumit)
[17:27:42] 10Scoring-platform-team, 10ORES: Reimage deployment-ores01 as Stretch - https://phabricator.wikimedia.org/T189790#4053901 (10awight) Ran into some puppet unpleasantness: ``` Could not find data item profile::ores::web::workers in any Hiera data file and no default supplied at /etc/puppet/modules/profile/manife...
[17:34:19] 10Scoring-platform-team, 10ORES: Reimage deployment-ores01 as Stretch - https://phabricator.wikimedia.org/T189790#4053934 (10Paladox) @awight you could either add the ores hiera values to the wiki page for that project (i.e. I think deployment-prep) or add them through hiera.
[17:43:34] halfak: your function approach with word vectors works, but I'm thinking about which is better: in one, we load and pass the vectors and let the script take care of them; in the other, the user has to define a wrapper function every time vectors are loaded
[17:45:59] Right. I think the wrapper function is a good option because we don't have to know things about the global vector. But I agree that it's slightly less intuitive. For an example of another place we pass a function, see the dictionary-based features in language.
[17:46:06] I think our stemmer takes a function too.
[17:46:18] It's pretty easy to define one with our helper method :)
[17:46:29] codezee, ^
[17:47:32] 10Scoring-platform-team, 10ORES: Reimage deployment-ores01 as Stretch - https://phabricator.wikimedia.org/T189790#4054025 (10awight) @Paladox Good idea. For the record, I used the following values, and puppet seems healthy so far. ``` { "profile::ores::celery::workers": 4, "profile::ores::celery::queue_maxsiz...
[17:48:40] halfak: but isn't having vectors in the script itself internal business, so that we have a clean API where we simply pass in loaded vectors to the datasource rather than a function? might be easier to change behaviour in future?
[17:49:43] Hmm. Not clear to me that this would be better or easier to change behavior either way.
[17:51:08] halfak: ok, the revscoring PR looks good then, just wanted to make my point. I'll change the drafttopic PR to reflect that
[17:51:30] 10Scoring-platform-team (Current), 10drafttopic-modeling: Checklist for drafttopic repo - https://phabricator.wikimedia.org/T189797#4054036 (10Sumit)
[17:52:15] OK +1 codezee
[17:54:17] done!
[17:55:27] halfak: I'm tracking final steps for drafttopic in https://phabricator.wikimedia.org/T189797 and I've submitted 3 PRs to that end, breaking the work up into logical chunks
[17:56:16] only one thing remains - the resolution of the text fetching/feature extraction scripts - the revscoring scripts in their current state cannot extract drafttopic stuff, so that's missing
[18:02:45] 10Scoring-platform-team, 10ORES: Reimage deployment-ores01 as Stretch - https://phabricator.wikimedia.org/T189790#4054051 (10awight) Puppet ran once, then broke itself: ``` Error: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=error: certificate verify failed: [self signed...
[18:03:00] 10Scoring-platform-team (Current): Investigate word2vec memory issues with multiprocessing - https://phabricator.wikimedia.org/T189364#4054052 (10Sumit) Final resolution done by using a wrapper function - https://github.com/wiki-ai/revscoring/pull/394
[18:03:38] codezee, why can't they extract?
[18:03:38] I'm confused.
[18:11:59] when I use fetch_text it tries to access some deleted history - https://github.com/wiki-ai/revscoring/blob/master/revscoring/utilities/fetch_text.py
[18:12:09] and gives permission denied
[18:12:18] I see!
[18:24:54] 10Scoring-platform-team (Current), 10drafttopic-modeling: Checklist for drafttopic repo - https://phabricator.wikimedia.org/T189797#4054141 (10Sumit)
[18:25:12] halfak: but no worries, we can use revscoring extract to extract features directly, bypassing text fetching
[18:25:19] so that automation is in place...
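(Editor's note: the wrapper-function approach settled on above, and merged as https://github.com/wiki-ai/revscoring/pull/394, passes a function that returns the loaded vectors rather than the vectors themselves, so the large word2vec binary stays a process-local detail. The sketch below illustrates the pattern under stated assumptions: gensim is used for loading, the file path is a placeholder, and the commented-out `vectorizers.word2vec(...)` call is hypothetical, not the real revscoring API.)

```python
# Sketch of "pass a function, not the vectors". Names are illustrative;
# see https://github.com/wiki-ai/revscoring/pull/394 for the real API.
from gensim.models import KeyedVectors


def my_kv():
    """User-defined wrapper: loads the vectors lazily, once per process."""
    if not hasattr(my_kv, "_kv"):
        # Placeholder path for whatever word2vec binary is in use.
        my_kv._kv = KeyedVectors.load_word2vec_format(
            "word2vec/enwiki.bin", binary=True)
    return my_kv._kv


# Hypothetical datasource construction: the extractor calls my_kv() inside
# each worker, so the multi-GB vectors object is never pickled and copied.
# word_vectors = vectorizers.word2vec(wikitext.revision.words, my_kv)
```

The alternative codezee raises (passing the already-loaded vectors in) makes the call site simpler, but then the vectors object has to survive being shared across celery/multiprocessing workers, which is the memory problem T189364 was about.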
[18:26:35] 10Scoring-platform-team (Current), 10drafttopic-modeling: Checklist for drafttopic repo - https://phabricator.wikimedia.org/T189797#4053788 (10Sumit) The recommended order for review should be - 18, 20, 19
[18:30:49] 10Scoring-platform-team, 10ORES: Reimage deployment-ores01 as Stretch - https://phabricator.wikimedia.org/T189790#4054168 (10awight) I've rebuilt the puppet certs. Now there's a conflict between redis and prometheus-redis
[18:31:02] and that brings us another step closer to completing drafttopic :)
[18:34:21] :)
[18:45:17] 10Scoring-platform-team, 10ORES: Reimage deployment-ores01 as Stretch - https://phabricator.wikimedia.org/T189790#4054234 (10awight) 05Open>03Resolved
[19:02:42] codezee, I'd be interested in reviewing a PR to revscoring that gives the option to log in (and look for deleted pages) or not.
[19:03:00] Anyway, I think directly using extract is a fine option.
[19:04:40] halfak: I'm not saying that deleted pages lookup should stop, what I'm saying is we can refactor fetch_text to address both use cases - extracting pages with/without deleted lookup
[19:04:47] I'll see if I can look into it
[19:04:56] +1. An option would make sense to me.
[19:05:14] this way it can handle generalized cases and be used by anyone interested in just fetching text
[19:05:19] codezee, keep it lower priority unless it sounds exciting :D
[19:05:38] yeah, first in line is merging those 3 little PRs :D
[19:19:38] +1 Am in meetings for a while but will review today :)
[19:31:37] (03PS1) 10Awight: Merge stretch_conversion into master [services/ores/deploy] - 10https://gerrit.wikimedia.org/r/419832
[19:32:15] (03CR) 10Awight: [V: 032 C: 032] "Clean up Stretch work, move into master." [services/ores/deploy] - 10https://gerrit.wikimedia.org/r/419832 (owner: 10Awight)
[19:49:49] Amir1: I'm trying to stand deployment-ores01 up again, and wondering how we were handling the firewall.
[19:50:06] The default doesn't open port 8081, but we would need that for the web proxy to work.
[19:50:10] awight: security groups
[19:50:22] and then DNS proxies
[19:50:23] Did you create a security group for this?
[19:50:38] IIRC yes
[19:50:49] otherwise the old thing wouldn't work
[19:52:40] Grr, I don't see 8081 in any of the existing groups, and we're out of quota to create a new group.
[19:52:47] I'll just open the port in "default"
[19:53:33] lol. {done}
[19:55:48] halfak: Wanna kick this? https://gerrit.wikimedia.org/r/419832
[20:04:06] hungry.
[20:06:06] Hey. Just got out of the ACTRIAL brown bag :)
[20:06:28] (03CR) 10Halfak: [V: 032 C: 032] Merge stretch_conversion into master [services/ores/deploy] - 10https://gerrit.wikimedia.org/r/419832 (owner: 10Awight)
[20:24:04] wiki-ai/editquality#178 (revscoring-2.2 - e833450 : Aaron Halfaker): The build passed. https://travis-ci.org/wiki-ai/editquality/builds/354023811
[20:51:15] I'm at another impasse with the git-lfs work, but it's a new impasse :)
[20:52:40] tl;dr, we learned that "git-lfs in a submodule" will take more scap development.
[20:53:26] oh crap
[20:53:33] hmm?
[20:53:39] How much more dev?
[20:54:09] For this iteration, probably a small amount of dev, but I have no idea what the schedule will be.
[20:54:29] And yeah, this is exactly where we didn't want to be, blocked on deployment toolchain fixes
[20:54:46] I liked my absolutely savage .deb packaging
[20:54:54] haha
[20:55:07] Good that we're getting git-lfs squared away though
[20:55:19] +1
[20:55:42] when you say "oh crap", does that mean we're missing timetables for drafttopic deployment?
[20:57:06] yeah, but I'm less concerned about that than just having another halt of progress.
[20:57:09] Halting progress is hard.
[21:01:37] True. It's going to be very halting for a bit.
[21:02:44] Once we're past the submodule thing, there's a nasty detail about
[21:02:59] about *not* rewriting git URLs when the repo is using LFS or something.
[21:03:08] everyone's on the same page: it's going to be fun
[21:09:24] (03PS1) 10Awight: Update default thresholds to new syntax [extensions/ORES] - 10https://gerrit.wikimedia.org/r/419856 (https://phabricator.wikimedia.org/T181159)
[21:09:27] (03PS1) 10Awight: Remove old thresholds syntax parser [extensions/ORES] - 10https://gerrit.wikimedia.org/r/419857 (https://phabricator.wikimedia.org/T181159)
[21:14:47] (03CR) 10jerkins-bot: [V: 04-1] Remove old thresholds syntax parser [extensions/ORES] - 10https://gerrit.wikimedia.org/r/419857 (https://phabricator.wikimedia.org/T181159) (owner: 10Awight)
[21:15:35] (03CR) 10Awight: "recheck" [extensions/ORES] - 10https://gerrit.wikimedia.org/r/419857 (https://phabricator.wikimedia.org/T181159) (owner: 10Awight)
[21:19:18] (03CR) 10jerkins-bot: [V: 04-1] Remove old thresholds syntax parser [extensions/ORES] - 10https://gerrit.wikimedia.org/r/419857 (https://phabricator.wikimedia.org/T181159) (owner: 10Awight)
[21:34:24] (03PS2) 10Awight: Remove old thresholds syntax parser [extensions/ORES] - 10https://gerrit.wikimedia.org/r/419857 (https://phabricator.wikimedia.org/T181159)
[21:42:59] (03CR) 10jerkins-bot: [V: 04-1] Remove old thresholds syntax parser [extensions/ORES] - 10https://gerrit.wikimedia.org/r/419857 (https://phabricator.wikimedia.org/T181159) (owner: 10Awight)
[21:45:18] (03PS3) 10Awight: Remove old thresholds syntax parser [extensions/ORES] - 10https://gerrit.wikimedia.org/r/419857 (https://phabricator.wikimedia.org/T181159)
[21:50:56] (03PS1) 10Awight: Use gerrit for assets submodule [services/ores/deploy] - 10https://gerrit.wikimedia.org/r/419878
[21:59:29] 10Scoring-platform-team (Current), 10Wikilabels, 10editquality-modeling, 10User-Ladsgroup, 10artificial-intelligence: Re. init enwiktionary reverted model & labeling campaign - https://phabricator.wikimedia.org/T188271#4054804 (10Halfak) a:05Ladsgroup>03Halfak
[21:59:42] wiki-ai/wikilabels#323 (message-formatting-fix - 6941498 : Aaron Halfaker): The build failed. https://travis-ci.org/wiki-ai/wikilabels/builds/354066043
[22:00:13] wiki-ai/wikilabels#325 (master - eff6b02 : Aaron Halfaker): The build was broken. https://travis-ci.org/wiki-ai/wikilabels/builds/354066485
[22:00:26] 10Scoring-platform-team (Current), 10Wikilabels, 10editquality-modeling, 10User-Ladsgroup, 10artificial-intelligence: Re. init enwiktionary reverted model & labeling campaign - https://phabricator.wikimedia.org/T188271#4001950 (10Halfak) I've reworked the PR to use 92k revisions. I've also re-loaded the...
[22:00:57] OK I finally have enwiktionary cleaned up. That was a pain in the butt. ...speaking of halting progress!
[22:01:32] Need some CR?
[22:02:27] yeah! Don't merge, but give me an LGTM on https://github.com/wiki-ai/editquality/pull/136
[22:02:41] I want to add a new model, but there'll be nothing to review in that model file.
[22:08:29] halfak: Needs a rebase
[22:08:34] Ahh thanks
[22:13:05] ARG so many conflicts
[22:13:07] >:(
[22:16:07] Our makefile generator likes to make inconsequential changes to floating points
[22:16:13] Which causes huge diffs :|
[22:16:20] Maybe we should manually round them.
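(Editor's note: one way to act on that last idea is to render every float at a fixed precision before it goes into the generated Makefile, so regenerating it doesn't churn unrelated lines. Below is a minimal sketch assuming a Jinja2-style template pipeline; the filter name, precision, and template snippet are illustrative, not the actual editquality generator.)

```python
"""Round-before-render sketch for a generated Makefile (assumptions above)."""
from jinja2 import Environment


def fixed_float(value, places=3):
    """Render floats at a fixed precision so regenerated Makefiles don't
    produce spurious diffs from trailing-digit drift."""
    if isinstance(value, float):
        return "{:.{p}f}".format(value, p=places)
    return value


env = Environment()
env.filters["fixed_float"] = fixed_float

template = env.from_string(
    "tune --pop-rate 'true={{ rate | fixed_float }}'")
print(template.render(rate=0.034782608695652174))
# -> tune --pop-rate 'true=0.035'
```

Formatting instead of plain `round()` also guarantees a stable number of digits in the output, which is what keeps the regenerated diffs quiet.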
[22:16:54] awight, should be good for re-review
[22:22:48] hahaha
[22:22:56] gtg for now, but I'm working tomorrow
[22:23:01] o/
[22:26:31] o/
[22:26:46] I'm out of here too. Hopefully I can have my bike back :)