[00:02:00] lol I copied & pasted this Begin watching logs, tail -f payments.error payments-initial fundraising-misc fundraising-drupal.
[00:02:11] but I actually don't know what directory :-)
[00:03:07] ha, it's frlog1001:/var/log/remote/
[00:05:09] ok - just disabling process control
[00:08:44] !log process-control config revision is 7598dc1bf9 - jobs disabled
[00:08:50] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[00:09:46] ok everything is in maintenance mode
[00:10:16] Cool from my end too -
[00:10:16] if run-job -r
[00:10:29] shows empty does that mean all the jobs are finished?
[00:10:45] no idea
[00:10:50] :-)
[00:11:00] Also are icinga alerts still relevant?
[00:12:05] they are yeah, I took catchpoint off the list because we're not using that service afaik, SRE is supposed to be cooking up some alternative, at any rate catchpoint doesn't recognize my login anymore so :-P
[00:12:19] :-)
[00:12:33] ok I think that line isn't crossed off
[00:12:49] I can see ls /var/log/process-control/ -altr is not showing any date updates so I think all jobs are finished
[00:12:52] the icinga one? I crossed it off partially b/c I didn't disable payments*
[00:13:11] ok, let me see if I can figure out whether redis has any messages left
[00:13:57] hrm it says it has 64 messages in payments-init
[00:14:20] do you know how to run whichever consumer is necessary to process those?
[00:15:31] hmm - I'm not sure
[00:16:13] I put the script to drop triggers in my home dir - drop_triggers.sql - it actually leaves a few behind but as long as there are none on the wmf_donor table or civicrm_contribution table that's ok for this
[00:16:52] ok
[00:18:50] can we try running consumers manually, if we can flush that queue I'll feel ok about proceeding with the OS/redis upgrade for the queue servers
[00:19:04] sure
[00:19:09] do you know which one?
[00:19:21] is it just the main donation queue?
[00:19:29] oh it's payments-init
[00:19:37] donations is 3, jobs is 0, jobs-adyen is 0, jobs-paypal is 0, payments-antifraud is 0, payments-init is 64, pending is 0, recurring is 0, refund is 0, unsubscribe is 0
[00:19:48] that's debug output from the nagios check ^^^
[00:21:57] I think it's this one fredge_multiqueue_consumer
[00:22:14] ok
[00:22:18] Jeff_Green: I just did run-job fredge_multiqueue_consumer
[00:22:22] did that clear it?
[00:22:26] checking
[00:23:21] it cleared payments-init, and now there are three messages in 'donations'
[00:24:03] OK - I just ran run-job donations_queue_consume
[00:24:16] woot. all clear! thanks!
[00:24:21] great
[00:24:28] exit
[00:24:30] whoops
[00:24:47] ?
[00:24:49] alright, so we're ready to do triggers right?
[00:24:54] yep
[00:25:02] oh I tried to log out of frqueue1001 and typed 'exit' here is all
[00:25:19] :-)
[00:25:30] no cause for alarm!
[00:26:09] drop_triggers.sql is in your homedir on civi1001?
[00:26:13] yep
[00:26:23] ok
[00:26:47] I just pushed out the code change too (although nothing will change until db update runs)
[00:26:55] ok
[00:28:34] so iirc it's: mysql civicrm < drop_triggers.sql
[00:28:44] these are triggers on civicrm db right, not drupal?
[00:29:01] yes
[00:29:17] the key thing is the ones on civicrm_contribution are gone
[00:29:23] ok here goes nothing!
[00:29:30] done
[00:29:44] cool - are there about 40 triggers now?
[00:29:47] there are 48
[00:29:57] & none on civicrm_contribution or wmf_donor?
[00:30:40] civicrm_contribution_after_insert INSERT civicrm_contribution
[00:30:55] civicrm_contribution_after_update UPDATE civicrm_contribution
[00:31:00] ok - but it's a really simple one updating contact modified date?
[00:31:19] lemme copy the output to civi1001 so you can look too
[00:31:24] cool
[00:32:07] civi1001:/tmp/triggers.20190718
[00:32:41] hmm no we want those gone - can you drop those 2 triggers?
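The nagios check's debug output above is just a flat "name is count" list; a small illustrative sketch of turning it into something scriptable, so you can see at a glance which queues still need a consumer run. The queue names and counts are taken from the log; the parsing helper itself is hypothetical, not part of the actual check.

```python
# Sketch: parse the nagios check's debug output ("donations is 3, ...")
# into a dict of queue depths, then pick out the queues that still
# have messages waiting for a consumer.

def parse_queue_depths(debug_line: str) -> dict:
    """Turn 'donations is 3, jobs is 0, ...' into {'donations': 3, ...}."""
    depths = {}
    for part in debug_line.split(","):
        name, _, count = part.strip().rpartition(" is ")
        depths[name] = int(count)
    return depths

line = ("donations is 3, jobs is 0, jobs-adyen is 0, jobs-paypal is 0, "
        "payments-antifraud is 0, payments-init is 64, pending is 0, "
        "recurring is 0, refund is 0, unsubscribe is 0")
depths = parse_queue_depths(line)
# queues that still need a consumer run
nonempty = {q: n for q, n in depths.items() if n > 0}
```

With the log's numbers, `nonempty` is exactly the two queues the pair then drained by hand with run-job.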
[00:32:57] We are about to delete some fields they update so it matters more this update than most
[00:33:06] sure
[00:33:35] just drop civicrm_contribution_after_insert and civicrm_contribution_after_update ?
[00:33:40] yep
[00:34:00] actually delete the ones on wmf_donor_after_insert
[00:34:07] (if it's easier delete them all)
[00:34:17] errrrr
[00:34:29] i already did civicrm_contribution_after_insert and civicrm_contribution_after_update
[00:34:40] yep - also wmf_donor_after_update
[00:34:48] wmf_donor_after_insert
[00:35:03] done
[00:35:09] actually they don't really matter I guess - but I have some hopes they won't come back
[00:35:16] also wmf_donor_after_delete
[00:35:17] keep wmf_donor_after_insert ?
[00:35:21] nah
[00:35:32] ideally no triggers on civicrm_contribution or wmf_donor at this stage
[00:36:58] we still have civicrm_contribution_after_delete
[00:37:08] is that the one the script just added?
[00:37:13] yeah ideally remove it too
[00:37:49] ok done
[00:38:01] ok I'll start the slooooowww db updates
[00:38:09] ok
[00:39:01] first query running ok - will take about 25 mins
[00:39:07] would this be a sane time to upgrade queue servers?
[00:39:09] or an insane time?
[00:39:17] I think it's fine
[00:39:39] I don't fully understand what you are wanting to do but 1) queues are empty & down
[00:40:03] &
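In the exchange above the leftover triggers get named and dropped one at a time by hand. A sketch of how the same check could be scripted: given the `(name, event, table)` rows as seen in the `/tmp/triggers.20190718` dump, emit `DROP TRIGGER` statements for anything still attached to the two tables that must be trigger-free before the schema change. The trigger and table names come from the log; the generator itself is hypothetical, not the script the team used.

```python
# Sketch: generate DROP TRIGGER statements for any trigger still on the
# tables that must be trigger-free before the column drop/add runs.
# Trigger/table names are from the log; the helper is illustrative.

SENSITIVE_TABLES = {"civicrm_contribution", "wmf_donor"}

def drop_statements(triggers):
    """triggers: iterable of (name, event, table) tuples, as in SHOW TRIGGERS output."""
    return [f"DROP TRIGGER IF EXISTS {name};"
            for name, _event, table in triggers
            if table in SENSITIVE_TABLES]

rows = [
    ("civicrm_contribution_after_insert", "INSERT", "civicrm_contribution"),
    ("civicrm_contribution_after_update", "UPDATE", "civicrm_contribution"),
    ("wmf_donor_after_insert", "INSERT", "wmf_donor"),
    ("log_civicrm_email_after_insert", "INSERT", "civicrm_email"),
]
stmts = drop_statements(rows)
# only the triggers on the two sensitive tables are dropped;
# triggers on other tables (like civicrm_email) are left alone
```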
2) the update running is just a query to drop some columns & add some columns & add an index to another table
[00:40:13] so slow - but not complex
[00:40:58] just checked diskspace - loads of space for the alter statements but 73% /srv/archive/banner_logs seems higher than I expected
[00:41:04] i want to do a Debian do-release-upgrade to upgrade each queue server from Jessie to Stretch, which will also bump the Redis version
[00:41:51] it's ok -- /srv/archive/banner_logs is an nfs mount from americium, so it's disk over there that's at 73% not civi1001
[00:42:17] eventually we'll have to purge some old banner logs but we should be good for quite a while
[00:45:24] upgrading frqueue2001
[00:45:28] cool
[00:53:54] (PS1) Eileen: Add triggers_drop.sql [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524389
[00:59:18] Jeff_Green: if you have any bandwidth (in both senses) over & above the queue upgrades at the moment there is a file I'm trying to get transferred from the office file server to frdev - but I can't get in on the vpn
[00:59:56] ah, sorry I can't help with that, I've never configured/tried/used the office vpn
[01:00:01] i figure if it's in the office . . . run!
[01:00:03] ok nw
[01:06:39] the first big query has finished - but I think there are 3 more to go
[01:06:51] alnilam is done, frqueue1002 is in progress, frqueue1001 is next
[01:06:52] cool
[01:09:15] (PS1) Eileen: Remove trailing spaces [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524391
[01:17:50] ejegg: are you popping in on your day off?
[01:18:05] or did your device do that without your consent :-)
[01:22:26] heh, trying to check in
[01:22:43] but this hostel's wifi is pretty spotty
[01:23:04] looks like the update is still underway
[01:23:19] any unexpected roadblocks?
[01:24:34] so far things are going ok.
I'm doing the frqueue upgrade to stretch while eileen does the database stuff
[01:26:14] yeah we are on the add fields update
[01:26:46] I got all my bits & pieces merged by cstone prior to starting but in about 50 mins when we reload triggers that's when the rubber hits the road
[01:28:53] cool!
[01:29:32] I'm about to head to bed here. hopefully I can find a better connection to get some work done tomorrow
[01:29:41] good luck with the rest
[01:35:35] queue servers are done!
[01:43:11] yay - we probably have a bit left on the queries....
[01:45:47] ok
[01:47:23] fundraising-tech-ops: EPIC: migrate fundraising off of Debian Jessie - https://phabricator.wikimedia.org/T185013 (Jgreen)
[01:47:26] fundraising-tech-ops: upgrade fundraising queue servers from Debian Jessie - https://phabricator.wikimedia.org/T221008 (Jgreen) Open→Resolved a:Jgreen This is done!
[02:02:09] PROBLEM - check_mysql on frdb2001 is CRITICAL: Slave IO: No Slave SQL: No Seconds Behind Master: (null)
[02:06:52] (PS1) Eileen: Separate do_not_solicit in merge code from otherwise calculated fields [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524393
[02:07:09] PROBLEM - check_mysql on frdb2001 is CRITICAL: Slave IO: No Slave SQL: No Seconds Behind Master: (null)
[02:07:35] ^^that's just the nagios downtime expiring after 2H, fixing
[02:08:45] ah cool
[02:09:52] did you get through your changes
[02:10:09] yep!
[02:13:15] the biggest query just finished
[02:13:54] I feel like the remaining one might be more like 5-10 mins
[02:15:47] ok - that one finished - yay
[02:16:43] wow that was quick
[02:16:47] so that's it for queries?
[02:18:03] yes - I'm re-enabling logging now
[02:18:18] & that will generate the trigger mysql to reload
[02:18:45] ok
[02:19:08] ah dang - it's adding some fields when I re-enable - but not to a large table - ie log_civicrm_value_1_communication_4
[02:21:29] hmm - not so small after all
[02:27:43] (PS1) Eileen: Updated triggers after extra fields [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524394
[02:29:14] Jeff_Green: new triggers are in new_triggers.sql
[02:29:24] ok
[02:29:33] should I apply them now?
[02:29:40] (CR) jerkins-bot: [V: -1] Updated triggers after extra fields [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524394 (owner: Eileen)
[02:30:00] (PS2) Eileen: Updated triggers after extra fields [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524394
[02:30:19] Jeff_Green: yes - I just put them in gerrit & the changes look right - https://gerrit.wikimedia.org/r/#/c/wikimedia/fundraising/crm/+/524394/2/sites/all/modules/wmf_civicrm/scripts/triggers.mysql
[02:30:41] so pull em in
[02:32:13] PROBLEM - check_mysql on frdb1002 is CRITICAL: SLOW_SLAVE CRITICAL: Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 1271
[02:32:13] ok
[02:34:33] let me know when loaded
[02:35:13] PROBLEM - check_mysql on frdev1001 is CRITICAL: SLOW_SLAVE CRITICAL: Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 1449
[02:37:13] PROBLEM - check_mysql on frdb1002 is CRITICAL: SLOW_SLAVE CRITICAL: Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 1571
[02:37:55] eileen: I'm failing to find a quick way to fetch the right version from gerrit
[02:38:13] Jeff_Green: no that's a double check - it's in my home dir
[02:38:21] oh! duh ok got it
[02:38:23] new_triggers.sql
[02:39:51] done!
[02:40:04] 458 triggers now
[02:40:13] PROBLEM - check_mysql on frdev1001 is CRITICAL: SLOW_SLAVE CRITICAL: Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 1750
[02:40:19] cool now I gotta try slow-start
[02:41:29] hmm there was nothing to start!
[02:41:45] nothing to import I mean
[02:42:08] because the queues are empty? or is it importing from elsewhere?
[02:42:13] PROBLEM - check_mysql on frdb1002 is CRITICAL: SLOW_SLAVE CRITICAL: Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 1871
[02:42:17] I think the queue is empty
[02:42:28] I just tried run-job donations_queue_consume
[02:43:12] should I take payments and payments-listener out of downtime so we can catch a transaction or two?
[02:43:20] yeah - sounds good
[02:43:22] ok
[02:43:35] can I take civicrm out of maintenance mode too?
[02:43:39] yep
[02:48:08] ok I just did a $3 visa donation
[02:48:40] cool
[02:49:31] this is my favorite! "18. Convince each other for a bit that it all works"
[02:50:12] eileen: i believe process-control comes back online when we take civi1001 out of maintenance mode, I hope that's ok
[02:50:38] yeah that's ok
[02:50:46] I just got a slow search unfortunately
[02:51:22] mariadb ~should~ be heating up indexes when it starts, but it's possible it doesn't
[02:51:43] ah ok that could be it
[02:52:04] I was definitely not hitting an index I thought it should
[02:52:21] we added a bunch more indexes so I was worried we might have just overloaded them
[02:52:58] ok
[02:53:30] my second search was quicker - but we'll need to keep an eye on it for a bit - we won't be doing anything today anyway
[02:54:01] ok, I can try to figure out where we are re. mariadb vs RAM limits tomorrow
[02:54:39] frdb1001 is reporting 113GB free RAM at the moment
[02:54:50] cool - we did add a bunch of new indexed calculated fields - it all tested well on staging but if we are banging up against something we should find that out
[02:55:09] We got so much speed boost off the php upgrade I figured we could waste some of it on usability :-)
[02:55:43] sounds good to me!
[02:55:59] that's pretty cool re.
php, I didn't anticipate it would be that significant
[02:56:49] Yeah it is visible in the graphs
[02:56:56] do you think it's safe to reenable replication on frdb2001 yet?
[02:57:07] Yes - I'm gonna re-enable the jobs too
[02:57:10] ok
[02:59:11] I'm going to let the icinga downtimes for frdb1002 and frdb2001 expire on their own, they'll expire within 2H and I will be alerted b/c my phone will go bonkers if they're still behind
[03:01:32] ok so from the list I think we just need to check if we need to update docs for next time & email people
[03:01:59] looking...
[03:03:33] I took the catchpoint reference out of the collabwiki doc
[03:03:37] cool
[03:03:45] otherwise I don't think we changed anything
[03:03:47] !log process-control config revision is 7598dc1bf9 (jobs reenabled)
[03:03:52] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[03:04:18] do you want to do the email?
[03:04:20] ok -
[03:04:51] cool. can you mention that we've upgraded the queue servers to stretch, and we're now on redis 3:3.2.6
[03:04:57] we only have 3 donations - since the outage - yours & 2 others - I guess with no banners & no campaigns....
[03:05:02] yeah
[03:05:12] it was pretty quiet before we started too
[03:06:15] is there anything else I can do, or are we good?
[03:07:51] I think we are good
[03:08:27] ok, then I'm off to bed! please text me if you discover anything surprising/broken
[03:09:03] thanks for all the help on this!
[03:09:32] thank YOU!
[03:15:11] RECOVERY - check_mysql on frdev1001 is OK: Uptime: 130563 Threads: 1 Questions: 7231917 Slow queries: 34394 Opens: 17101 Flush tables: 1 Open tables: 200 Queries per second avg: 55.390 Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 0
[03:17:13] RECOVERY - check_mysql on frdb1002 is OK: Uptime: 6096421 Threads: 1 Questions: 169463373 Slow queries: 1120690 Opens: 55152545 Flush tables: 1 Open tables: 199 Queries per second avg: 27.797 Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 0
[03:30:18] I'm just trying to confirm the jobs are running - we got some failmail so I guess that's good ;-0
[03:45:22] hmm - no new donations since I manually ran them - but I also don't know if that's just no-one is donating right now
[03:45:31] gonna get a cuppa & see how it's going
[04:11:39] !log I think I didn't push the turn it on commit - tried again process-control config revision is 9f7eba2193
[04:11:46] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[04:12:09] PROBLEM - check_mysql on frdb2001 is CRITICAL: SLOW_SLAVE CRITICAL: Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 7266
[04:17:11] PROBLEM - check_mysql on frdb2001 is CRITICAL: SLOW_SLAVE CRITICAL: Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 7566
[04:57:09] RECOVERY - check_mysql on frdb2001 is OK: Uptime: 6102218 Threads: 1 Questions: 168782885 Slow queries: 1118535 Opens: 55247615 Flush tables: 1 Open tables: 200 Queries per second avg: 27.659 Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 0
[05:09:56] (PS1) Eileen: Add job to backfill numbers for wmf_donor [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524399 (https://phabricator.wikimedia.org/T228242)
[05:19:35] Fundraising-Backlog: Add slow start option to donation queue consumer - https://phabricator.wikimedia.org/T228488 (Eileenmcnaughton)
[05:25:08] Fundraising-Backlog: Add slow start option to donation queue consumer - https://phabricator.wikimedia.org/T228488 (Eileenmcnaughton)
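The icinga PROBLEM/RECOVERY lines above all hinge on one number, "Seconds Behind Master", which is `(null)` when replication is stopped (as it was while frdb2001 was deliberately paused) and a plain integer otherwise. A small illustrative parser for that status line; the message format is copied from the log, the helper itself is hypothetical.

```python
# Sketch: pull "Seconds Behind Master" out of a check_mysql status line.
# Returns None when replication is stopped/unknown ("(null)"), else the
# lag in seconds, so it can be compared against an alert threshold.
import re

def seconds_behind_master(msg: str):
    m = re.search(r"Seconds Behind Master: (\d+|\(null\))", msg)
    if m is None or m.group(1) == "(null)":
        return None  # replication stopped or status unavailable
    return int(m.group(1))

ok = "Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 0"
lagging = "SLOW_SLAVE CRITICAL: Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 7266"
stopped = "Slave IO: No Slave SQL: No Seconds Behind Master: (null)"
```

This matches what happens in the log: the 7266-second lag on frdb2001 after replication is re-enabled drains back to 0 on its own, which is why the downtimes were simply left to expire.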
[05:31:55] (PS2) Eileen: Add job to backfill numbers for wmf_donor [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524399 (https://phabricator.wikimedia.org/T228242)
[05:35:53] (PS3) Eileen: Add job to backfill numbers for wmf_donor [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524399 (https://phabricator.wikimedia.org/T228242)
[05:43:55] (PS4) Eileen: Add job to backfill numbers for wmf_donor [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524399 (https://phabricator.wikimedia.org/T228242)
[05:51:55] (PS5) Eileen: Add job to backfill numbers for wmf_donor [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524399 (https://phabricator.wikimedia.org/T228242)
[05:52:54] (CR) jerkins-bot: [V: -1] Add job to backfill numbers for wmf_donor [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524399 (https://phabricator.wikimedia.org/T228242) (owner: Eileen)
[05:57:08] (PS6) Eileen: Add job to backfill numbers for wmf_donor [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524399 (https://phabricator.wikimedia.org/T228242)
[06:21:31] (PS7) Eileen: Add job to backfill numbers for wmf_donor [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524399 (https://phabricator.wikimedia.org/T228242)
[06:40:46] (PS8) Eileen: Add job to backfill numbers for wmf_donor [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524399 (https://phabricator.wikimedia.org/T228242)
[06:43:50] (PS9) Eileen: Add job to backfill numbers for wmf_donor [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524399 (https://phabricator.wikimedia.org/T228242)
[06:47:46] (PS10) Eileen: Add job to backfill numbers for wmf_donor [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524399 (https://phabricator.wikimedia.org/T228242)
[06:51:19] (PS11) Eileen: Add job to backfill numbers for wmf_donor [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524399 (https://phabricator.wikimedia.org/T228242)
[08:02:36] (PS12) Eileen: Add job to backfill numbers for wmf_donor [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524399 (https://phabricator.wikimedia.org/T228242)
[08:14:42] (PS13) Eileen: Add job to backfill numbers for wmf_donor [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524399 (https://phabricator.wikimedia.org/T228242)
[08:26:53] (PS14) Eileen: Add job to backfill numbers for wmf_donor [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524399 (https://phabricator.wikimedia.org/T228242)
[13:28:45] (PS11) Vedmaka Wakalaka: Campaign fallback [extensions/CentralNotice] - https://gerrit.wikimedia.org/r/517931 (https://phabricator.wikimedia.org/T124969)
[13:47:35] (PS12) Vedmaka Wakalaka: Campaign fallback [extensions/CentralNotice] - https://gerrit.wikimedia.org/r/517931 (https://phabricator.wikimedia.org/T124969)
[14:08:24] hi fr-tech!
[14:12:42] ejegg: ¡jeló!
[14:29:33] cstone: looks like I need to add the gateway_txn_id to the messages from the front end - not seeing them in my local tests
[14:35:59] ejegg ah okay I wasn't either
[14:36:01] also hello!
[14:48:20] hi cstone :) the xdebug demo I'm having with our gsoc student is running over, could we push our call ahead 30 minutes?
[14:48:33] sure
[14:54:17] thanks!
[14:59:59] (PS1) Ejegg: WIP deal with gateway_txn_id in a standard way [extensions/DonationInterface] - https://gerrit.wikimedia.org/r/524535
[15:02:40] (CR) jerkins-bot: [V: -1] WIP deal with gateway_txn_id in a standard way [extensions/DonationInterface] - https://gerrit.wikimedia.org/r/524535 (owner: Ejegg)
[15:15:43] (PS1) Ejegg: Add gateway_txn_id to subscr_start message [extensions/DonationInterface] - https://gerrit.wikimedia.org/r/524538 (https://phabricator.wikimedia.org/T216560)
[15:16:41] cstone I'm trying to slay some tech debt at the same time as I get the txn id into the message, but it's not as easy as I'd hoped
[15:17:02] so in the meantime I made a bandaid style patch to accomplish the same thing without touching the other adapters
[15:17:19] bandaid patch (should be good for testing): https://gerrit.wikimedia.org/r/524538
[15:17:33] ok. I like the term slaying tech debt
[15:17:40] tech-debt-slaying, but failing, patch: https://gerrit.wikimedia.org/r/524535
[15:18:48] for the recurring payment tokens, should they be different between the one time payment and the recurring queue one? I am seeing the same token there
[15:23:58] cstone yep, it's the same token
[15:26:21] ok and in the importSubscriptionSignup it's looking for installments and create_date on the message, those are paypal specific things?
[15:27:41] cstone ah, good point - i should add create_date to the message from the front end too
[15:28:03] installments shouldn't be needed, even for paypal
[15:28:05] (PS13) Vedmaka Wakalaka: Campaign fallback [extensions/CentralNotice] - https://gerrit.wikimedia.org/r/517931 (https://phabricator.wikimedia.org/T124969)
[15:28:21] since we don't do any time-limited (i.e.
just make 12 payments) recurring donations
[15:28:21] ah okay
[15:28:39] will update the bandaid patch with create_date
[15:29:05] though i think the recurring queue consumer defaults to using the 'date' message field if that doesn't exist
[15:29:37] be a few mins late cstone just wrapping up
[15:29:53] sorry about this!
[15:31:17] no problem
[15:31:57] (PS2) Ejegg: Add gateway_txn_id to subscr_start message [extensions/DonationInterface] - https://gerrit.wikimedia.org/r/524538 (https://phabricator.wikimedia.org/T216560)
[15:34:27] fr-tech I may take some more time off this afternoon
[15:34:46] this part of this trip is looking complicated
[15:35:44] ready cstone !
[15:44:03] (PS14) Vedmaka Wakalaka: Campaign fallback [extensions/CentralNotice] - https://gerrit.wikimedia.org/r/517931 (https://phabricator.wikimedia.org/T124969)
[16:28:32] fr-tech, I've finally saved this command that I have to remember every time I want to easily see the contents of one of our queue messaged in redis. It might be useful for others too! https://www.commandlinefu.com/commands/view/24625/pretty-print-json-block-that-has-quotes-escaped
[16:28:48] messages*
[17:23:27] (PS15) Vedmaka Wakalaka: Campaign fallback [extensions/CentralNotice] - https://gerrit.wikimedia.org/r/517931 (https://phabricator.wikimedia.org/T124969)
[18:25:11] (CR) XenoRyet: "Looks good, just looks like a stray file crept in there.
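The commandlinefu one-liner linked above handles queue messages whose JSON arrives with escaped quotes, i.e. a JSON string whose value is itself a JSON document. The same trick in Python is just decoding twice. The sample payload below is made up for illustration; only the double-decode is the point.

```python
# Sketch: pretty-print a redis queue message that is a JSON string
# containing an escaped JSON document (the case the linked one-liner
# handles). The payload here is a made-up example, not a real message.
import json

raw = '"{\\"gateway\\": \\"adyen\\", \\"gross\\": \\"3.00\\"}"'
inner = json.loads(raw)       # first pass: unescape -> plain JSON text
message = json.loads(inner)   # second pass: parse the actual document
pretty = json.dumps(message, indent=2, sort_keys=True)
```

The equivalent shell pipeline would feed the redis-cli output through two rounds of JSON decoding before pretty-printing, which is what the linked command does.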
Take that back out and I'm happy to +2" [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524399 (https://phabricator.wikimedia.org/T228242) (owner: Eileen)
[20:06:27] (PS15) Eileen: Add job to backfill numbers for wmf_donor [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524399 (https://phabricator.wikimedia.org/T228242)
[20:06:32] (CR) Eileen: "The 1 is gone" [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524399 (https://phabricator.wikimedia.org/T228242) (owner: Eileen)
[20:09:52] Fundraising-Backlog, Wikimedia-Fundraising-CiviCRM: Civi: make sure Forget Me requests reach IBM - https://phabricator.wikimedia.org/T222287 (Eileenmcnaughton) @MBeat33 BUT there is a UI way to do forget me in the IBM interface - you can upload a csv of emails & it will forget them - that is the same thi...
[20:23:19] Fundraising-Backlog, Wikimedia-Fundraising-CiviCRM: Civi: make sure Forget Me requests reach IBM - https://phabricator.wikimedia.org/T222287 (Eileenmcnaughton) Hmm the flaw in the UI is that you have to go through each DB in turn - plus they put a scary message when you go to access it that leaves me in...
[20:25:18] Fundraising-Backlog, Wikimedia-Fundraising-CiviCRM: Civi: make sure Forget Me requests reach IBM - https://phabricator.wikimedia.org/T222287 (CCogdill_WMF) Yep, we'd need to forget in each DB. And yes, the UI is spooky! But it works.
[20:33:18] Fundraising-Backlog, Wikimedia-Fundraising-CiviCRM: Civi: make sure Forget Me requests reach IBM - https://phabricator.wikimedia.org/T222287 (MBeat33) Open→Resolved a:MBeat33 Thanks for the extra info, y'all. Sandra's got only 16 left, so I think we'll stick w/re-forgetting in Civi. But this...
[20:37:19] Fundraising-Backlog, Wikimedia-Fundraising-CiviCRM: Civi: Upgrade permissions level for Payments Specialist - https://phabricator.wikimedia.org/T228480 (Eileenmcnaughton) @mbeat33 - try now
[20:42:09] Fundraising-Backlog, Wikimedia-Fundraising-CiviCRM: Civi: Upgrade permissions level for Payments Specialist - https://phabricator.wikimedia.org/T228480 (MBeat33) @EMartin When you have a moment, want to see if https://civicrm.wikimedia.org/civicrm/fredge will now give good results from a .csv file?
[20:46:25] Fundraising Sprint Never Ending Query, Fundraising-Backlog, Wikimedia-Fundraising-CiviCRM: Add new calendar year fields to silverpop export - https://phabricator.wikimedia.org/T228241 (Eileenmcnaughton) @CCogdill_WMF I think you need to add fields to silverpop for the new fields you want from this li...
[20:50:33] Fundraising Sprint Never Ending Query, Fundraising-Backlog, Wikimedia-Fundraising-CiviCRM: Add new calendar year fields to silverpop export - https://phabricator.wikimedia.org/T228241 (CCogdill_WMF) It's easiest to add the fields once they appear in the export file. Are they there already, just not p...
[21:08:42] Fundraising Sprint Never Ending Query, Fundraising-Backlog, Wikimedia-Fundraising-CiviCRM: Add new calendar year fields to silverpop export - https://phabricator.wikimedia.org/T228241 (Eileenmcnaughton) @CCogdill_WMF no, not yet but if that's the easiest order then I'll add them - hopefully next week
[21:13:43] Fundraising Sprint Never Ending Query, Fundraising-Backlog, Wikimedia-Fundraising-CiviCRM: Add new calendar year fields to silverpop export - https://phabricator.wikimedia.org/T228241 (CCogdill_WMF) Cool, thank you! That way we just disable the current import and set up a new one mapping the new fiel...
[21:16:38] (CR) XenoRyet: [C: +2] Add job to backfill numbers for wmf_donor [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524399 (https://phabricator.wikimedia.org/T228242) (owner: Eileen)
[21:17:53] thanks XenoRyet
[21:17:58] No worries
[21:21:48] (Merged) jenkins-bot: Add job to backfill numbers for wmf_donor [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/524399 (https://phabricator.wikimedia.org/T228242) (owner: Eileen)
[21:22:49] (PS1) Eileen: Merge branch 'master' of https://gerrit.wikimedia.org/r/wikimedia/fundraising/crm into deployment [wikimedia/fundraising/crm] (deployment) - https://gerrit.wikimedia.org/r/524596
[21:23:07] (CR) Eileen: [C: +2] Merge branch 'master' of https://gerrit.wikimedia.org/r/wikimedia/fundraising/crm into deployment [wikimedia/fundraising/crm] (deployment) - https://gerrit.wikimedia.org/r/524596 (owner: Eileen)
[21:23:45] (Merged) jenkins-bot: Merge branch 'master' of https://gerrit.wikimedia.org/r/wikimedia/fundraising/crm into deployment [wikimedia/fundraising/crm] (deployment) - https://gerrit.wikimedia.org/r/524596 (owner: Eileen)
[21:25:40] !log civicrm revision changed from 21d3c5a3fc to f932e56cd2, config revision is 9f7eba2193
[21:25:47] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[21:28:52] (PS3) Jforrester: Replace references to deprecated Squid config with modern CDN ones [extensions/CentralNotice] - https://gerrit.wikimedia.org/r/514396
[21:28:54] (PS1) Jforrester: Use MediaWikiServices instead of $wgContLang and $wgParser [extensions/CentralNotice] - https://gerrit.wikimedia.org/r/524597 (https://phabricator.wikimedia.org/T160811)
[21:31:37] (CR) Jforrester: "This is the last production code path that uses the old config names.
(We're not planning to kill off the old values any time soon, but…)" [extensions/CentralNotice] - https://gerrit.wikimedia.org/r/514396 (owner: Jforrester)
[21:40:54] hmm the job seems to be doing smaller batches than I think it should but it seems to work - perhaps I'll schedule it & by the time I look at the batch size it will have made a lot of progress anyway
[21:50:04] XenoRyet: if you are still about can you approve the commit on process control so I can push it out - it just adds the job
[21:50:46] Yea, let me take a look
[21:51:03] it's pretty conservative but it will do 10000 every 2 mins - so 300k an hour I guess
[21:51:42] I've been testing it with bigger numbers in the UI -
[21:51:50] sorry with drush on the command line
[21:51:54] Cool
[21:54:28] ok to gtg?
[21:54:49] Yep, looks good
[21:55:50] Go ahead and push it whenever you're ready.
[21:57:10] !log update process control process-control config revision is c913a5f261
[21:57:17] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[21:57:27] ok so prior to deploying that
[21:57:32] 138347 Contacts
[21:57:32] Largest Donation ≥ 1.00
[21:57:54] - ie largest donation is a new field so that 130k of contacts is the number populated
[21:58:31] (mostly by me running the drush in the command line - only about 10k were 'naturally' populated)
[21:58:39] I'll check back in an hour & see
[22:15:00] well it's only gone up by 10k since then but still I suspect over the weekend it will make big inroads & I can decide about speeding it up on Monday
[22:16:51] nope there's something wrong
[22:19:15] or maybe
[22:27:19] no - it's just that the batch limit is a bit low I think
[22:29:55] XenoRyet: I think I'm gonna pump it up to 50k per run - I guess I did 200k on the command line & it seemed ok
[22:30:10] Sounds reasonable
[22:32:15] Do you want to check the commit - it's very straightforward
[22:32:43] Yea, if you want me to.
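The throughput estimates above ("10000 every 2 mins - so 300k an hour", then bumping to 50k per run) check out; a quick sanity-check of the arithmetic. The batch sizes and interval come from the log; the helper function is just illustrative.

```python
# Sketch: sanity-check the backfill throughput figures quoted in the log.
# One run every 2 minutes means 30 runs an hour, times rows per run.

def per_hour(batch_size: int, interval_minutes: int) -> int:
    # runs per hour times rows per run
    return (60 // interval_minutes) * batch_size

rate_now = per_hour(10_000, 2)     # the conservative initial setting
rate_bumped = per_hour(50_000, 2)  # after pumping it up to 50k per run
```

At the bumped rate the remaining backlog clears five times faster, which is consistent with the revised "done by Monday" estimate at the end of the log.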
Though that's straightforward enough you can probably just push it if you want.
[22:32:51] I'll just push it
[22:33:05] Yea, tiny changes like that I just push myself as well
[22:33:28] the process on process-control is kinda clunky
[22:33:48] It is a bit, yea
[22:35:51] dstrine: having carefully set expectations that the backfill could take weeks my revised estimate is that it should be done by Monday
[23:08:30] oh yay!!
[23:08:40] eileen: that's great