[00:22:14] Gotta run!
[02:04:05] hey, do you still use any of this stuff?
[02:04:22] http://civicrm.frdev.wikimedia.org/
[02:04:32] http://civicrm-gr.frdev.wikimedia.org/
[02:04:38] http://civicrm2-gr.frdev.wikimedia.org/
[02:04:45] they are all "connection was reset"
[02:04:52] i just see it as cruft in DNS?
[13:53:11] Wikimedia-Fundraising, Mobile: Mobile fundraising banners appearing within other apps, moving up the screen - https://phabricator.wikimedia.org/T100421#1341302 (Pcoombe) Wikimedia Italia's banners are only on it.wikipedia.org though (and I'd point out that we weren't informed about them, it was just a loc...
[15:38:28] jeff_green: i have 1 3TB disk for barium. Let's schedule downtime for next week
[15:38:46] wed or thurs morning
[15:41:41] ah great
[15:42:05] is it the same drive that died last time?
[15:42:17] err not the new one, but the same drive on the system
[15:42:21] different disk
[15:42:37] so both 3TB disks failed within a few months of each other
[15:42:52] the last time it was the 4th disk... this time the 3rd
[15:43:00] yeah and one of the 3TB disks failed on aluminium
[15:43:05] that has a dual failure
[15:43:13] a 500GB and a 3TB
[15:43:25] aluminium is borked
[15:43:28] yeah
[15:43:35] may need to reinstall
[15:43:59] that's fine, I was thinking about upgrading it to trusty anyway
[15:44:23] do you want to send the email since you're doing the repair?
[15:44:30] i think we should request new hardware for it.... the server is pretty old now
[15:44:48] how long before it's out of warranty?
[15:44:57] is that even a positive number?
[15:44:58] it's been out of warranty
[15:45:09] :-P ok
[15:45:25] 1.5 years out of warranty
[15:45:35] purchased in 2011
[15:46:02] ok
[15:46:12] i'm cool with replacing it altogether instead of repairing it
[15:47:09] okay... i will put in a h/w request.... may need you to add your $.02
[15:47:18] ok
[17:04:43] http://www.timescall.com/news/ci_28256115/berthoud-tornadoes-destroy-3-damage-12-homes
[17:04:59] crazy El Niño summers
[17:05:35] wouldn't be surprised if we lose power today
[17:24:04] cwdent: wow!
[17:24:46] not very far from my house
[17:24:51] there was also baseball-size hail
[17:25:18] crazy... or... I guess that's not normal there?
[17:25:28] The guy's fish tank survived! What a polite tornado
[17:25:50] awight: fantastic initiative you took there getting everyone in a circle, collective notetaking, etc \o/ way to go
[17:26:56] woot! thanks. it was fun
[17:27:08] yeah that was a great call
[17:27:18] ya was fun!
[17:27:21] and no AndyRussG not normal this close to the mountains
[17:27:36] Ah hmmm 8p
[17:27:40] also the tornado was moving west which is really strange
[17:27:59] Maybe it was lost
[17:28:15] hehe
[17:28:18] wayward twister
[17:28:31] Thanks for the great point, too, AndyRussG. Trevor added to it nicely. I'd love to brainwash the new execs at the anarchosyndicalist day camp
[17:28:48] heh, the long and winding twister road
[17:29:13] awight: thx, yea definitely worth trying
[17:29:36] That's totally the key, acculturation rather than class warfare.
[17:29:49] I know of at least one example (from quite a different context) where it has worked, in fact
[17:29:58] * Nemo_bis heard that Gramsci's egemonia culturale (cultural hegemony) theory is strong in the USA
[17:30:05] I honestly think they'd be relieved if we started a workers' coop and started making all the difficult decisions ourselves.
[17:30:18] Hi!
[17:30:40] Nemo_bis: I don't think we call it that though, cos our schools are shit.
[17:30:53] Nemo_bis: certainly cultural hegemony is hard at work there (and here too)
[17:30:55] On a very much more mundane note, does anyone know which analytics group I should ask to join? https://wikitech.wikimedia.org/wiki/Analytics/Data_access#Access_Groups
[17:31:06] We call it T.V.
[17:31:45] Did you folks see the news item about the massive digital TV in Mexico?
[17:31:53] AndyRussG: I think, just ask for Hive access
[17:31:56] I meant, massive digital TV _giveaway_
[17:32:00] nooo
[17:32:20] It's been going on for a while but the other day even the NYT picked up on it
[17:32:49] awight: hmm OK
[17:33:12] Oh, I've been meaning to share this link, I think it's the best reporting done so far. Also translated into Spanish. https://firstlook.org/theintercept/2015/05/04/how-43-students-disappeared-in-mexico-part-1/
[17:33:20] Sorry for bringing the mood down...
[17:33:46] awight: oh thanks! I hadn't seen that one
[17:35:18] There have been a few more... mass assassinations... in the news, since then, some more documented than others, though that's still the iconic one of these times
[17:36:03] urrrgh
[17:36:31] we can shorten "war on drugs" to just "War on".
[17:37:28] indeed...
[17:37:48] What got me about this edition of La Jornada (one of the best news sources there, BTW, I guess you know that) is that the political assassination and burning of election papers was back-cover rather than front-cover news: http://www.jornada.unam.mx/2015/06/03/
[17:38:38] La Jornada is such a rag
[17:38:46] But so is the New York Times, for that matter
[17:39:12] Holy crap
[17:39:19] LJ is far from perfect, but still way above most else
[17:40:22] Yeah I noticed that most newspapers are just a local drug guy or innocent in a pool of blood, or butts and boobs
[17:40:32] same as here...
[17:40:58] Some pretend to be more than that but their disconnect from reality can be insane
[17:41:19] Here are some other decent-ish sources: http://aristeguinoticias.com/ http://www.sinembargo.mx/
[17:41:23] oh thx!
[17:41:49] yw!
[17:41:55] One of my favorite surveys was categorizing TV news programs by their "fluff and mayhem" vs news ratios
[17:42:21] Heh sounds like fun
[17:43:25] You've seen this? http://en.wikipedia.org/wiki/Mexican_general_election_2006_controversies#Official_count
[17:43:27] For a while Aristegui was pooh 'cause she gave a lot of airtime to rabid opponents of a very legitimate student strike, but she's still much closer to real than a lot of 'em
[17:44:37] Ah yes I remember that.....
[17:45:04] The analysis I read said that there was a clear point in time at which the ratio of Calderon and Lopez's votes vs the total ballots became fixed, and the derivatives of votes for each over time locked to within .1% until the end of tallying
[17:46:46] Yea crazy stuff
[17:47:16] A friend there at the time sat in on some of the judicial recount and also said it was BS
[17:47:44] Yikes!
[17:48:01] There comes a point where you just automatically assume the worst, 'cause when you do you're usually right
[17:48:50] "Piensa mal y acertarás" ("assume the worst and you'll be right")
[17:48:52] Diebold...
[17:50:25] Arg indeed :/
[17:54:37] (CR) Ejegg: [C: 2] Update to the Gerrit php-queue [extensions/DonationInterface] - https://gerrit.wikimedia.org/r/215990 (owner: Awight)
[17:55:03] (Merged) jenkins-bot: Update to the Gerrit php-queue [extensions/DonationInterface] - https://gerrit.wikimedia.org/r/215990 (owner: Awight)
[18:10:59] atgo: sorry for my illegal attempt to add story points to the bug... I plead ignorance...
[18:11:25] I had been thinking of asking you about it but then was just, OK what the heck
[18:22:54] awight: ejegg: cwdent: pls LMK if you have any thoughts on this one: https://phabricator.wikimedia.org/T101265
[18:24:01] i wish i did, i am very interested to know the answer
[18:28:03] AndyRussG: I don't have any guesses at this point. GeoIP could be the issue, but the library is self-contained
[18:28:28] Do we still have S:RI impressions for no banner shown, which include the country?
[18:28:53] awight: hmm good point
[18:29:12] Now that I've fixed my FR cluster access I should be able to query the DB a bit
[18:29:19] Also maybe S:RI for any campaign that's not geolocated
[18:29:42] Maybe I'll poke about on ops in a bit
[18:29:43] cwdent: awight: thanks BTW!
[18:29:44] do you use this? http://barium-fundraising.wikimedia.org/ , it's like a redirect to civi
[18:29:54] just cause i look at DNS zones
[18:29:56] mutante: I've never used that
[18:30:13] Probably best to avoid coupling hostnames to the service
[18:30:13] I'm gonna duck out for a quick errand before it rains here.... back soon :)
[18:30:34] awight: thanks, i also have this https://gerrit.wikimedia.org/r/#/c/216023/1/templates/wikimedia.org
[18:30:57] mutante: thanks for the reminder, I was gonna -1 that, we use those!
[18:31:02] They're https endpoints is all
[18:31:21] plus additional security I will not disclose here...
[18:31:27] ah, client certificates
[18:31:45] what is "gr"?
[18:31:49] just curious
[18:31:57] a contractor, Giant Rabbit
[18:31:59] glad you asked :)
[18:32:11] ah! :) yea, i have been on their page, gotcha
[18:32:21] let me abandon that then
[18:34:24] hehe no worries AndyRussG|bassoo
[18:44:04] gonna get food, back shortly
[19:49:36] awight: ejegg: cwdent: Hmm this seems to be a little tardy: select * from bannerimpressions where timestamp like '2015-06-01%' limit 10;
[19:50:39] tardy as in... hangs forever?
[19:51:06] cwdent: yeah
[19:51:21] or until I got nervous and pressed ^C
[19:51:28] yeah, fuzzy match on 200M rows...
[19:51:41] 200M?!?!!
[19:51:47] there's probably a better way to limit that query though
[19:51:57] to recent data
[19:52:00] Ah right, the whole database
[19:52:12] Right, id > something
[19:52:14] yeah bannerimpressions is gigantic, but if you said something like
[19:52:19] yeah id > x
[19:52:33] lop off 99% of the table
[19:52:36] Mmmm the timestamp field _is_ indexed...
[19:53:12] i think LIKE is pretty heavy though
[19:58:00] AndyRussG: yeah, do a range query rather than like
[19:58:27] for that one, "where timestamp between '2015-06-01' and '2015-07-01'" or somesuch
[19:59:17] http://use-the-index-luke.com/sql/where-clause/searching-for-ranges/like-performance-tuning
[20:00:03] lol at that domain name
[20:00:23] awesome
[20:00:29] heh indeed
[20:00:30] awight: thx!
[20:00:35] cwdent: thx!
[20:00:52] the other issue with LIKE is that it is converting all the timestamp data to a string, which is also expensive.
[20:01:30] ah hmm riiiiiiiiiiiiight
[20:01:58] arg I was assuming it was a string, which I think is what we tend to use in MW
[20:01:59] * AndyRussG checks
[20:02:19] mysql needs explain analyze
[20:02:25] cwdent: it has
[20:02:30] "explain select ..."
[20:02:32] soooorta
[20:02:58] weee battle o' databases ;p
[20:03:20] I choose an unknown Semantic Web store
[20:07:36] hey dstrine awight are you guys joining the standup?
[20:08:28] i'm in a long TPG meeting
[20:08:46] sorry! I can attend next week
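(For reference, a minimal sketch of the range-scan rewrite suggested at 19:58, assuming pgehres.bannerimpressions.timestamp is the native, indexed DATETIME column the discussion implies; the PDO connection details are placeholders, not the real fr-cluster credentials.)

```php
<?php
// Sketch only: the 19:49 query rewritten as an index-friendly range scan.
$db = new PDO('mysql:host=127.0.0.1;dbname=pgehres', 'user', 'pass');

// Slow: LIKE converts the indexed DATETIME to a string for every row,
// so MySQL walks all ~200M rows instead of seeking into the index.
//   SELECT * FROM bannerimpressions WHERE timestamp LIKE '2015-06-01%' LIMIT 10;

// Fast: a half-open range keeps the comparison in the column's native type,
// so the existing timestamp index can be range-scanned. (Half-open rather
// than BETWEEN avoids including midnight of the next day.)
$stmt = $db->prepare(
    'SELECT * FROM bannerimpressions
     WHERE timestamp >= :start AND timestamp < :end
     LIMIT 10'
);
$stmt->execute([
    ':start' => '2015-06-01 00:00:00',
    ':end'   => '2015-06-02 00:00:00',
]);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

// To verify, prefix the statement with EXPLAIN (per 20:02) and check
// that the plan shows type=range on the timestamp index.
```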
[20:08:53] no worries~ just checking
[20:14:36] atgo: argh sorry
[20:14:47] no worries - just checking
[20:14:56] still going?
[20:15:01] naw
[20:15:03] k
[20:16:01] I haven't touched a thing today, but I'm hoping to peck at implementing the Civi +DAF import. Rosie and I agreed on the template and that's ready to go.
[20:18:13] any tips on keeping ssh sessions to the fr cluster from dying after brief inactivity?
[20:18:44] I think there's a 20-minute timeout. You're getting disconnected earlier?
[20:29:53] awight: yeah much
[21:05:41] ejegg: oops, just realized you're probably waiting on me to rebase the limbo patch
[21:06:12] (PS9) Awight: Orphan slayer reads from Memcache [extensions/DonationInterface] - https://gerrit.wikimedia.org/r/211062 (https://phabricator.wikimedia.org/T99017)
[21:06:37] ahh, will take another look!
[21:07:36] me too :)
[21:07:50] That was a long detour, to add Redis...
[21:08:33] this bit mostly seems straightforward
[21:10:35] well, yeah the magic is $orphans = $this->getOrphans();
[21:12:26] hrm, hold on, I think I still need to mirror to activemq
[21:12:26] so now if it crashes / is stopped after fetching a batch of orphans, they're all popped out of the queue
[21:12:46] that too...
[21:13:06] I could fix
[21:13:15] does that worry you enough to do something about it?
[21:14:17] I would want to fix it on principle, but I'm not exactly worried about it. I think fatal errors will shake out quickly.
[21:18:36] however... this makes pop() really awkward
[21:18:49] yeah...
[21:18:59] pop-but-not-really-delete
[21:19:07] -till-i-say-so
[21:19:10] then, pop next undeleted thing :)
[21:19:23] * awight hastily rationalized laziness
[21:23:15] Another thing that makes me not care is that the buffered ack thing was not part of the business logic.
[21:24:43] what did you need to change to mirror to activemq?
[21:26:30] I guess, just acking in getOrphans
[21:27:22] oh, acking up front instead of the ack bucket?
[21:28:47] (PS10) Awight: Orphan slayer reads from frack Redis [extensions/DonationInterface] - https://gerrit.wikimedia.org/r/211062 (https://phabricator.wikimedia.org/T99017)
[21:30:20] Nah--in the last iteration of getOrphans, I only deleted from Redis, but we want to ack the STOMP mirror too
[21:30:47] yeah, but you were still using the ack bucket in the rectify loop
[21:31:08] looks at least as good moved to getOrphans though!
[21:31:34] oh wat
[21:31:46] oh wait, you still have it in the rectify loop
[21:34:08] thanks for the CR, whew! yeah, in getStompOrphans the ack'ing is only for "false orphans", so I guess I should remove it from the rectify clause and stick with simpler pop behavior, cos we want all orphans > minage to be gone when we exit the script.
[21:35:05] (PS11) Awight: Orphan slayer reads from frack Redis [extensions/DonationInterface] - https://gerrit.wikimedia.org/r/211062 (https://phabricator.wikimedia.org/T99017)
[21:37:01] hmm, ok, what about errors during rectifying? Can we still throw those messages back?
[21:41:23] oh jeez, i think i'm just generating a random order id for astropay and not storing it anywhere. I should send ct_id as our invoice #
[21:42:30] ejegg: n.b. the ctid might be repeated up to three times, I donno if AP expects a new invoice # each time
[21:42:49] oh right, they do
[21:43:13] wait, where do we store the order id for all the other GWs?
[21:43:21] thanks, you're right about how rectification worked pre-butchery
[21:43:31] ejegg: in the session, IIRC
[21:44:25] ah, so we've always had to use their txn id to correlate stuff between dbs
[21:44:42] and the order id is just handy for logs
[21:44:54] not quite following
[21:45:05] the "gateway txn id" should be the same as order id
[21:45:26] ohhhh...
[21:45:37] and for some gateways we do base the order id on ct_id, sometimes exactly, other times with suffixes
[21:45:47] k. I need to change some stuff
[21:45:57] our terminology is a mess... feel free to document
[21:46:48] we send them an invoice ID (our ref #) with the NewInvoice call and get a document id (their txn ID) back
[21:47:35] Fundraising Sprint M, Fundraising-Backlog: Forms seem to be fitting to size of security text instead of what we had before - https://phabricator.wikimedia.org/T101564#1342494 (atgo) NEW
[21:47:57] right now i'm sending the auto-genned order id as invoice ID, and setting gateway txn id to x_document in the return
[21:48:04] oh crap, only in the resultswitcher!
[21:48:11] yeah... definitely needs work
[21:48:34] nice to have caught it so early!
[21:49:06] ahh, no, they don't even give us a doc id at NewInvoice!
[21:49:15] just the redirect link.
[21:49:53] so I definitely need to use the ct_id as our invoice_id
[21:50:13] ugh, then that also has to be the correlation ID for the 'pending' message
[21:50:42] while the rest of 'em are using the gateway + txn id
[21:51:09] dang...
[21:51:15] wait, why's that?
[21:51:35] I mean, we should stick to gw+txnid if at all possible
[21:51:42] We don't get a txn ID back from Astropay until the user has filled stuff out on their site
[21:51:58] oh sorry--yeah we should use gw+ctid then
[21:52:17] yeah, looks like the thing!
[21:52:20] whew...
[22:08:21] (PS3) Awight: Remove unused logfile parsing code [extensions/DonationInterface] - https://gerrit.wikimedia.org/r/207742
[22:14:28] arg there were no non-geotargeted campaigns up during our June 1 outage, so I don't have a way of checking if it was a geoip outage
[22:15:29] Do we send S:RI for empty impressions?
[22:15:44] if so, u could check Hive. It only takes a few hours per query
[22:17:00] awight: right... I think we do that sampled, lemme check...
[22:17:43] kick me please: https://gerrit.wikimedia.org/r/#/c/214779/
[22:24:43] awight: predis backend will use default port?
[22:24:59] yah
[22:25:03] k
[22:37:05] wow, this orderIDMeta stuff is pretty baroque
[23:06:45] (PS1) Ejegg: Use ct_id.numAttempt format for Astropay order number [extensions/DonationInterface] - https://gerrit.wikimedia.org/r/216338
[23:11:08] ejegg: that... just happened.
[23:11:18] Might be why K4 took a long vacation :p
[23:11:34] * ejegg checks email warily
[23:11:51] I mean, something along that vein was necessary, but probably like a 4-line array that gives the priority of each source?
[23:12:44] oh, just the order_id_meta stuff
[23:13:16] ejegg: do you feel like merging that test data branch? i have some stuff to push but it's sitting on top of the unsquashed commits from that change
[23:13:18] yeah that's what I was on about
[23:13:56] cwdent: oh, sure! It looked good in gerrit - lemme just run the stuff locally
[23:14:13] (CR) Awight: [C: 2] "Great idea!" [extensions/DonationInterface] - https://gerrit.wikimedia.org/r/216338 (owner: Ejegg)
[23:14:18] thanks!
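(For reference, a minimal sketch of the "pop-but-not-really-delete-till-i-say-so" behavior debated at 21:18-21:37, using the stock Redis reliable-queue idiom via Predis; the key names, message format, and rectify() stub are hypothetical illustrations, not the actual orphan slayer code.)

```php
<?php
// Sketch: reserve-then-ack pop, assuming Predis (already a dependency)
// and made-up queue keys. RPOPLPUSH atomically moves each message to a
// holding list, so a crash after fetching (the 21:12 worry) leaves it
// recoverable instead of silently dropped.
require 'vendor/autoload.php';

function rectify(array $message) {
    // Placeholder for the real rectification work.
}

$redis = new Predis\Client(); // no args: default host/port, per 22:24

while (($raw = $redis->rpoplpush('pending', 'processing')) !== null) {
    try {
        rectify(json_decode($raw, true));
        // Ack: only now is the message actually deleted.
        $redis->lrem('processing', 1, $raw);
    } catch (Exception $e) {
        // Error during rectifying (21:37): throw the message back for
        // a retry, and stop rather than spin on a poison message.
        $redis->lrem('processing', 1, $raw);
        $redis->lpush('pending', $raw);
        break;
    }
}
```

The atomic move is what replaces the buffered-ack bookkeeping: nothing is ever in limbo that isn't sitting in the 'processing' list, where a recovery pass can find it.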
[23:14:40] (Merged) jenkins-bot: Use ct_id.numAttempt format for Astropay order number [extensions/DonationInterface] - https://gerrit.wikimedia.org/r/216338 (owner: Ejegg)
[23:14:58] awight: thanks!
[23:16:30] np! Getting my butt kicked trying to run my orphan limbo locally...
[23:20:30] cwdent: what do you think of adding a million to each contact / contrib ID?
[23:20:36] awight: ejegg: cwdent: anyone remember what the count field in the pgehres.bannerimpressions table is about?
[23:21:04] AndyRussG: that's the aggregate number of impressions that match the rest of the columns, for that 15-minute period
[23:21:27] always a multiple of 100 'cos it assumes the sample rate is 1:100 and inflates accordingly
[23:21:37] ejegg: i could do that, but what does it test?
[23:22:45] cwdent: might avoid conflicts in case you're using civi for anything but dash
[23:23:31] I think the admin user gets contact ID 1, for example
[23:23:42] so that might be there as soon as you install
[23:23:44] ah yeah, good point
[23:23:58] i could just remove the ids and let them auto increment too
[23:24:18] yeah, but then fks bite ya
[23:24:35] or the inserts get ugly
[23:24:58] yeah i think some of that ugliness is already in there
[23:26:17] yeah if you think adding a million is sufficient that's nice and easy
[23:26:20] aww, you're planning to insert civi data using raw SQL?
[23:26:27] that will result in all sorts of borkenness
[23:26:47] yeah i mean this only works for testing dash right now
[23:27:26] It would be nice to have a solution that doesn't break Civi, so we can enable test data by default in the vagrant role, for example.
[23:27:37] also, cos it's a pretty huge PITA to reinstall the Civi db.
[23:27:51] The checks thing didn't work?
[23:27:52] for sure, i just didn't know the entry points
[23:28:23] well they weren't making rows in civicrm_contribution, and i needed bannerimpressions to correlate too
[23:28:25] Sorry if I'm ragging on u, I've just hosed my Civi db so many times doing similar tricks...
[23:28:42] no i get it
[23:28:43] ejegg: thanks! argh, right, it's all making a key out of timestamp, banner, campaign, project, lang, country...
[23:29:01] for banner impressions, you could insert contrib_id=(select random from civi)
[23:29:03] i just felt like i needed to get something rolling and learning all the civi entry points was going to take forever
[23:29:16] cwdent: wait, checks import wasn't adding stuff to contributions?
[23:29:29] I thought that was its primary function...
[23:29:33] no, just drupal.contribution_tracking
[23:29:39] but not civi
[23:30:05] so to get a large enough sample set we could use casperjs or selenium or something
[23:30:13] harrr
[23:30:16] something must not be configured right... this was in vagrant civi, or local box?
[23:30:22] in vagrant
[23:30:47] I thought the issue was that it was not writing to contribution_tracking. Hrm
[23:31:06] errr maybe that was it...
[23:31:37] which actually would be ok because it turns out i don't need anything from the civi db for this
[23:31:40] ok I desist for now, feel free to spam up the dbs, just... make a card for doing it another way!
[23:32:13] well i wasn't planning to run this against anything except a blank local db
[23:32:25] cwdent: awight: ejegg: bothering u again... do we have like a standard method of generating graphs like this one that the-wub made? https://docs.google.com/spreadsheets/d/1QkUUOf3YW8QAidnIqkAT4wjcvfalFFNyM2OFHHdF5Sw/edit?pli=1#gid=1808269283
[23:32:59] naw, that's pretty though
[23:33:09] I want to do another with a finer grain than the 1-hr aggregates he used
[23:33:15] Indeed :)
[23:33:31] ooh, looks like built in to google docs
[23:33:35] Maybe ask the-wub to share the doc so you can get the source?
[23:33:37] K I'll just see what standard droob I can do with libreoffice
[23:34:15] Well I did fetch the whole bannerimpressions table for May 31-June 2 inclusive
[23:34:19] hrm? Anonymous Wolverine is viewing
[23:34:30] i'll make a card for generating real civi test data though. do we have browser tests running somewhere?
[23:34:58] that seems like the best way to automate...
[23:35:04] right?
[23:35:07] we don't
[23:35:23] Probably not a good way to automate cos it's incredibly slow.
[23:35:32] ejegg: on the Google Doc? the-wub didn't make it protected and I'm viewing on my browser where I don't log in to free privacy-invading pooh services
[23:35:34] I think we should either use an import spreadsheet, or the CiviCRM API
[23:35:45] so that must be me ;p
[23:35:49] hehe
[23:35:59] ah, civicrm api
[23:36:04] that sounds good
[23:36:05] yeeeuch
[23:36:10] what about bannerimpressions?
[23:36:12] but that would sort of be duplicating code
[23:36:17] those... we have no API for
[23:36:27] but the schema's much simpler, so SQL is probably fine there.
[23:36:48] yeah... it just has to sync up with the contributions tables
[23:37:10] u no like insert contrib=(select random from civi)?
[23:37:23] sure yeah
[23:37:46] what is the api?
[23:37:52] for civi?
[23:37:55] yeah
[23:38:01] it's horrific.
[23:38:11] it's an api after all
[23:38:28] and, the import does nice things like going through our custom hooks....
[23:38:41] cwdent: it's code-generated
[23:39:12] we tend to use it in the .install files in dirs under sites/all/modules
[23:39:19] but, search crm/modules/wmf_civicrm/wmf_civicrm.module for civicrm_api_classapi
[23:39:30] aah, so you think just building a big spreadsheet to import is best?
[23:39:39] I'm sort of being a dick about it: i do
[23:39:45] must be the end of the day
[23:39:52] /week
[23:40:02] I really am fine with the db shreddage, it's just that then ejegg has to feel the pain too
[23:40:05] hehe
[23:40:13] nah that's understandable
[23:40:19] and he won't be able to do all my CRM homework for me :p
[23:40:41] at least he doesn't steal my lunch money
[23:40:48] it's ridiculous
[23:40:55] stop doing my homework!
[23:41:04] I need to... learn something :)
[23:41:05] i'm fine scrapping that test data, it was just incidental to writing a couple queries
[23:42:01] If it's just for the dash... how about this--you preface with "CREATE TABLE" statements so we fail hard if this is a real Civi db?
[23:42:22] so it can only be run on an empty schema
[23:42:42] I'm psyched about having test data, that's been a dream for a while now
[23:43:51] i mean, if test data that will work with civi is the goal i'd just as soon do that
[23:43:55] cause it will work with dash too
[23:44:08] that sounds better to me
[23:44:17] holy grail!
[23:44:20] i don't mind spending some time on that
[23:44:27] i can do roughly the same thing i just did
[23:44:34] http://mockaroo.com/
[23:44:41] I'd love to be able to spin up a 10M row test grail that will eat all the resources in my house
[23:44:58] full of duplicate email addresses...
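(For reference, a rough sketch of seeding test data through the CiviCRM API rather than raw SQL, as discussed above, so validation and the custom hooks still run. The generic civicrm_api() entry point and civicrm_initialize() exist in a Drupal/Civi install, but every field value here is invented sample data, and financial_type_id=1 meaning "Donation" is only an assumption about a stock install.)

```php
<?php
// Sketch: create test contacts and contributions via the CiviCRM v3 API
// instead of raw INSERTs. Assumes a bootstrapped Drupal/Civi environment.
civicrm_initialize();

for ($i = 0; $i < 100; $i++) {
    $contact = civicrm_api('Contact', 'create', array(
        'version'      => 3,
        'contact_type' => 'Individual',
        'first_name'   => "Test{$i}",
        'last_name'    => 'Donor',
        // Duplicate emails on purpose, per the 23:44 wish list.
        'email'        => 'test' . ($i % 10) . '@example.com',
    ));

    civicrm_api('Contribution', 'create', array(
        'version'           => 3,
        'contact_id'        => $contact['id'],
        'financial_type_id' => 1, // assumed: "Donation" in a stock install
        'total_amount'      => mt_rand(1, 100),
        'receive_date'      => '2015-06-01 12:00:00',
    ));
}
```

The bannerimpressions rows could then be generated with plain SQL (no API exists for them, per 23:36), pointing each row at a randomly selected contribution as suggested at 23:37.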
[23:45:00] i just have to learn what acceptable data looks like
[23:45:57] aaaarrrgh
[23:46:07] (PS1) Awight: servers don't have to be an array [wikimedia/fundraising/php-queue] - https://gerrit.wikimedia.org/r/216340
[23:49:50] cwdent: excellent point--the online stuff is pretty simple, creating by import will look realistic, but the major gifts ones get nasty. We could model those in a later iteration
[23:49:52] (CR) Ejegg: [C: 2 V: 2] "Yep, that \Predis\Client has a pretty darn versatile constructor" [wikimedia/fundraising/php-queue] - https://gerrit.wikimedia.org/r/216340 (owner: Awight)
[23:50:56] sounds good, that will be my monday
[23:51:03] or maybe my weekend if the weather keeps doing this
[23:51:50] awight: I'm really rusty on mw-vagrant usage. I pulled your changes and did another vagrant provision, but I'm not seeing the settings updated
[23:52:35] cwdent: no weekend work unless you're willing to sacrifice a weekday to having fun!
[23:52:46] should I drop & re-add the fundraising role?
[23:52:51] ejegg: the provision should've done it...
[23:52:59] no need to re-add, it's just a pointer
[23:53:14] this is the file that gets written: vagrant/settings.d/wikis/paymentswiki/settings.d/puppet-managed/10-DonationInterface.php
[23:53:39] oh, i'm looking in the wrong spot
[23:53:47] so... i have a change to this submodule
[23:54:01] just adding wmf_ab to node_modules
[23:54:33] ejegg: ^ oh I was gonna ask, do you want to do the dev/prod thing with submodules in dash/?
[23:54:43] so i'll go in there, check out master, commit
[23:55:01] then update the outer repo to the new commit?
[23:56:04] awight: yeah... I guess we should.
[23:56:57] use the package manager in dev, use a submodule on live?
[23:57:03] yep
[23:57:17] seems reasonable
[23:57:39] so, just check in the package.json change
[23:57:40] it's a funny future we live in where we ship apps with deps
[23:57:51] hrm suddenly I can't run with minfraud enabled
[23:57:55] and small web apps are 500M
[23:58:04] gah
[23:58:34] whoa.. 150M dash/
[23:58:39] you ain't kidding
[23:59:05] disk space is cheap i guess
[23:59:08] and dependencies are hard
[23:59:56] hah - the biggest node modules are the minifiers