[01:43:24] Fundraising-Backlog: Strict error message in logs (quick tidy up) - https://phabricator.wikimedia.org/T171560#3468769 (Eileenmcnaughton)
[05:10:19] PROBLEM - check_raid on frdb2001 is CRITICAL: CRITICAL: HPSA [P420i/slot0: OK, log_1: 3.3TB,RAID1+0 OK, phy_1I:1:6: Predictive Failure]
[identical check_raid PROBLEM alerts for frdb2001 repeated roughly every five minutes through 13:05:15; duplicates trimmed]
[13:09:15] ACKNOWLEDGEMENT - check_raid on frdb2001 is CRITICAL: CRITICAL: HPSA [P420i/slot0: OK, log_1: 3.3TB,RAID1+0 OK, phy_1I:1:6: Predictive Failure] Jeff_Green see T171584
[14:59:16] (PS6) Mepps: Add tests & finalise unsubscribe process [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/365510 (https://phabricator.wikimedia.org/T161760) (owner: Eileen)
[14:59:20] (CR) Mepps: [C: 2] Add tests & finalise unsubscribe process [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/365510 (https://phabricator.wikimedia.org/T161760) (owner: Eileen)
[15:07:22] (Merged) jenkins-bot: Add tests & finalise unsubscribe process [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/365510 (https://phabricator.wikimedia.org/T161760) (owner: Eileen)
[15:18:59] (PS1) Mepps: Fix duplicate check logic [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/367682 (https://phabricator.wikimedia.org/T171349)
[15:21:16] ejegg ^^ something small i noticed
[15:22:20] Fundraising-Backlog, Patch-For-Review: Are we losing transactions with repeated ct_id? - https://phabricator.wikimedia.org/T171349#3461897 (mepps) @ejegg What I notice is our earlier patch handled duplicate invoice ids, not contribution ids. Should we change that patch?
[15:29:31] (CR) jerkins-bot: [V: -1] Fix duplicate check logic [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/367682 (https://phabricator.wikimedia.org/T171349) (owner: Mepps)
[15:45:10] mepps thanks!
[15:45:29] ejegg tests currently broken but i'm having issues running them locally
[15:45:37] oh shoot, how so?
[15:45:53] oh, and do we have a good test case for that?
[15:46:35] huh i'm getting this: sh: 1: cv: not found
[15:46:35] RuntimeException: Command failed (cv php:boot --level=classloader):
[15:46:35] in cv() (line 33 of /var/www/fr-tech/crm/sites/default/civicrm/extensions/org.wikimedia.omnimail/tests/phpunit/bootstrap.php).
[15:46:52] mepps ah yeah
[15:47:12] Eileen's new extension tests use the civibuild utilities
[15:47:18] so they have to be in your path
[15:47:28] cv should be in civibuild/bin
[15:48:08] mepps another small patch for review (this one PayPal EC-related): https://gerrit.wikimedia.org/r/367624
[15:49:11] fundraising-tech-ops, monitoring: overhaul fundraising cluster monitoring - https://phabricator.wikimedia.org/T91508#3470596 (cwdent) a:cwdent
[15:57:14] Fundraising-Backlog, Patch-For-Review: Are we losing transactions with repeated ct_id? - https://phabricator.wikimedia.org/T171349#3470631 (Ejegg) I don't think any of the messages coming in to the queue consumers have contribution IDs. If we did feed in something with a contribution ID, I guess we'd be...
[15:57:50] (PS7) Ejegg: Update SmashPig and DonationInterface [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/365069
[16:00:06] (PS2) Ejegg: Fix PayPal EC recurring profile created messages [wikimedia/fundraising/SmashPig] - https://gerrit.wikimedia.org/r/367624 (https://phabricator.wikimedia.org/T171546)
[16:09:16] hmm what is cv supposed to alias to ejegg? i added buildkit/bin to my path
[16:10:18] mepps for me, cv is just a Box-ed php executable
[16:10:27] does it not exist in your buildkit bin?
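The cv failure above comes down to PATH: the extension test bootstrap shells out to cv, so buildkit's bin directory has to be visible to the shell that runs phpunit. A minimal sketch, assuming a buildkit checkout at ~/buildkit (that location is hypothetical):

    # put civicrm-buildkit's bin dir (which ships cv, civibuild, etc.) on the PATH
    export PATH="$HOME/buildkit/bin:$PATH"
    command -v cv                      # confirm cv now resolves
    cv php:boot --level=classloader    # the exact call the test bootstrap makes via cv()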
[16:11:30] (CR) Ejegg: [C: -1] Fix duplicate check logic (1 comment) [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/367682 (https://phabricator.wikimedia.org/T171349) (owner: Mepps)
[16:12:49] i see it now but for some reason the script isn't finding it
[16:13:55] mepps dang, your $PATH somehow isn't getting through to the exec?
[16:14:02] i guess so?
[16:14:29] i'm off to lunch, hopefully will resolve this when i get back
[16:14:40] ok, enjoy your meal!
[16:38:50] Fundraising Sprint Gondwanaland Reunification Engine, Fundraising Sprint Homebrew Hadron Collider, Fundraising Sprint Ivory Tower Defense Games, Fundraising Sprint Judgement Suspenders, and 7 others: Drush not handling spaces in quotes / schedule Si... - https://phabricator.wikimedia.org/T171435#3470762
[17:35:42] (CR) Ejegg: [C: -1] "Looking really good! Probable debug code found, couple questions inline." (6 comments) [extensions/CentralNotice] - https://gerrit.wikimedia.org/r/364910 (https://phabricator.wikimedia.org/T168673) (owner: AndyRussG)
[17:38:15] mepps I seem to have broken the crm tests locally too - lots of failures with undefined index 'location_type_id', using civicrm submodule at master/HEAD
[17:38:47] i'm getting super weird fail messages now ejegg
[17:38:48] sigh
[17:38:54] did you get past the 'missing cv' error?
[17:39:38] i think so but my error now is super weird
[17:39:51] and are you getting issues other than the "Undefined index: location_type_id"?
[17:40:16] yes running vendor/phpunit/phpunit/phpunit ./modules/wmf_civicrm/tests/ and getting:
[17:40:16] Cannot open file "./modules/wmf/civicrm/tests/.php".
[17:40:24] urp?
[17:40:31] k, that's odd
[17:40:35] yeah seriously
[17:41:07] hmm, I get the same thing
[17:41:19] I guess it's expecting a file path, not a dir?
[17:41:41] I think you need to use the suite XML anyway, though
[17:41:50] to get the bootstrapping
[17:42:09] maybe use --group to limit?
[17:42:43] vendor/bin/phpunit --group WmfCivicrm runs for me
[17:42:53] though with all those undefined index fails
[17:44:08] well now i get a new error!
[17:44:22] what's that?
[17:44:25] PHP Fatal error: Class 'AmazonDonationMessage' not found in /var/www/fr-tech/crm/sites/all/modules/queue2civicrm/tests/phpunit/DonationQueueTest.php on line 323
[17:44:47] hmm, try composer install?
[17:45:08] it says all installed
[17:45:22] huh
[17:45:31] oh right, that's not smashpig, that's just part of the tests
[17:47:22] mepps that should be defined in queue2civicrm/tests/includes/Message.php
[17:47:32] which is pulled in via queue2civicrm.info
[17:47:58] make sure that's enabled and drush cc all?
[17:48:31] also, make sure you're not skipping the phpunit bootstrap file
[17:51:03] it was enabled but for some reason just drush ccing all seems to have let me make progress, tests are in progress now
[17:51:26] all sorts of new failures
[17:51:48] ok, progress...
[17:51:53] hi K4-713 !
[17:51:59] Oh hey. :)
[17:52:16] mepps lmk if they're the same 'undefined index location_type_id' that i'm getting
[17:53:23] ejegg: I'm going to see if IRC makes sense again. For a long time, people were using it mostly to jump my priority queue...
[17:53:36] i think now my issue is because i reinstalled civicrm
[17:53:38] good to have you back!
[17:54:48] Heh, thanks.
[17:55:52] whatup K4
[17:56:46] cwd: yo.
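For reference, a minimal sketch of the invocation that works in the exchange above: let phpunit read the suite's own XML config (so the bootstrap file gets loaded) and narrow the run with --group, rather than pointing phpunit at a bare test directory. The group name and the drush steps come straight from the discussion; the working directory is an assumption:

    cd /var/www/fr-tech/crm
    vendor/bin/phpunit --group WmfCivicrm   # suite XML supplies the bootstrap
    # if test-support classes like AmazonDonationMessage aren't found, make sure
    # the module that declares them is enabled, then clear Drupal's caches:
    drush en queue2civicrm -y
    drush cc all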
[17:59:43] Fundraising-Backlog, MediaWiki-extensions-CentralNotice, MediaWiki-extensions-Translate, Performance-Team, WMDE-Fundraising-CN: WMDE banners failing to save - Timing out on save - https://phabricator.wikimedia.org/T170591#3471410 (Krinkle) (Subscribing, but adding to Radar for now - per IRC c...
[18:04:51] (PS8) Ejegg: WIP Enable PHP_CodeSniffer, fix complaints [wikimedia/fundraising/SmashPig] - https://gerrit.wikimedia.org/r/285228 (https://phabricator.wikimedia.org/T133576) (owner: Awight)
[18:06:04] (CR) jerkins-bot: [V: -1] WIP Enable PHP_CodeSniffer, fix complaints [wikimedia/fundraising/SmashPig] - https://gerrit.wikimedia.org/r/285228 (https://phabricator.wikimedia.org/T133576) (owner: Awight)
[18:06:13] mepps after civi upgrade, drush en `cat sites/default/enabled_modules` and drush updb you still get the same errors?
[18:10:07] Fundraising-Backlog, FR-ActiveMQ: Expire pending database entries at some point - https://phabricator.wikimedia.org/T143944#2584285 (Ejegg) We're now expiring them in the expire_pending_messages job Adyen: 31 days Amazon: 31 days AstroPay: 60 days GlobalCollect: 14 days PayPal: 14 days
[18:10:19] yes :/
[18:10:36] Fundraising Sprint Muggle Baiting, Fundraising Sprint Nitpicking, Fundraising Sprint Octopus Untangling, Fundraising Sprint Pretending This Isn't Happening, and 9 others: Rewrite orphan rectifier to use the pending database and WmfFramework - https://phabricator.wikimedia.org/T141486#3471555 (Ejegg)
[18:10:37] Fundraising-Backlog, FR-ActiveMQ: Expire pending database entries at some point - https://phabricator.wikimedia.org/T143944#3471554 (Ejegg) Open>Resolved
[18:10:52] mepps and how are you running phpunit now?
[18:12:00] Fundraising-Backlog, Wikimedia-Fundraising-Banners, Epic, Spike: Spike: Investigate potential for banner impressions rewrite - https://phabricator.wikimedia.org/T131278#3471595 (Ejegg)
[18:13:18] Fundraising Sprint Pretending This Isn't Happening, Fundraising-Backlog, Wikimedia-Fundraising-CiviCRM, FR-ActiveMQ, and 2 others: [Epic] Rewrite all queue clients to use a single shim library, improve library - https://phabricator.wikimedia.org/T133108#3471621 (Ejegg)
[18:13:21] Fundraising Sprint Baudelaire Bowdlerizer, Fundraising Sprint Pretending This Isn't Happening, Fundraising-Backlog, Wikimedia-Fundraising-CiviCRM, and 4 others: Migrate pending consumers to new queue and finish cleanup - https://phabricator.wikimedia.org/T131274#3471620 (Ejegg) Open>Resolv...
[18:14:39] mepps I'mma grab some lunch, good luck!
[18:40:54] mepps do you get the same errors if you just run vendor/bin/phpunit with no options?
[18:44:27] nope i get a different error: EF..EEEEFEEEEFFFEEEFEE.EFF..EFE.E.E.......PHP Fatal error: Class 'SmashPig\Tests\TestingConfiguration' not found in /var/www/fr-tech/crm/sites/all/modules/queue2civicrm/tests/includes/TestingSmashPigDbQueueConfiguration.php on line 8
[18:45:13] i tried completely uninstalling and then re-enabling the modules but the db tables are still missing
[18:45:19] is there a script i can run to add them?
[18:48:20] mepps did you composer update rather than just composer install?
[18:48:47] looks like you have the new version of SmashPig
[18:49:04] yup ran composer update
[18:49:34] ok, this patch updates CRM to use the new version: https://gerrit.wikimedia.org/r/365069
[18:50:01] till that's merged, just 'composer install' to get the version that's in composer.lock
[18:52:27] okay
[18:55:28] But if you feel like reviewing that one, please go ahead!
[18:59:54] cool, will take a look!
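The composer distinction above is worth spelling out: install puts dependencies at exactly the versions pinned in composer.lock, while update re-resolves against composer.json and rewrites the lock file. A sketch of recovering from an accidental update, under those standard composer semantics (assumes the lock-file change hasn't been committed):

    cd /var/www/fr-tech/crm
    git checkout -- composer.lock   # discard the locally rewritten lock file
    composer install                # restores SmashPig at the pinned (lock-file) version
    # composer update               # this is what pulled in the newer SmashPig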
[19:29:38] ejegg: that cron looked like it deployed but is not on live
[19:31:10] eileen1: oh shoot, let me take a look
[19:31:23] thx
[19:34:42] AndyRussG|a-whey: mepps meeting?
[19:35:14] ahh right, tuesday
[19:36:17] Fundraising-Backlog: Strict error message in logs (quick tidy up) - https://phabricator.wikimedia.org/T171560#3472019 (DStrine) p:Triage>Normal
[19:39:07] Fundraising-Backlog, Wikimedia-Fundraising-CiviCRM: Update CiviCRM ganglia bits to point to Prometheus - https://phabricator.wikimedia.org/T171524#3472028 (DStrine) p:Triage>Normal
[19:40:10] (CR) Mepps: [C: 2] Fix PayPal EC recurring profile created messages [wikimedia/fundraising/SmashPig] - https://gerrit.wikimedia.org/r/367624 (https://phabricator.wikimedia.org/T171546) (owner: Ejegg)
[19:41:16] (Merged) jenkins-bot: Fix PayPal EC recurring profile created messages [wikimedia/fundraising/SmashPig] - https://gerrit.wikimedia.org/r/367624 (https://phabricator.wikimedia.org/T171546) (owner: Ejegg)
[19:41:30] Fundraising-Backlog, Wikimedia-Fundraising-CiviCRM: Upstream setting to control email limit - https://phabricator.wikimedia.org/T171424#3472032 (DStrine) p:Triage>Normal
[19:42:30] Fundraising Sprint Navel Warfare, Fundraising-Backlog, FR-PayPal-ExpressCheckout, Patch-For-Review, Unplanned-Sprint-Work: Are PayPal refunds for recurring donations incorrectly being tagged as EC or vice versa? - https://phabricator.wikimedia.org/T171351#3472034 (DStrine)
[19:43:49] Fundraising Sprint Navel Warfare, Fundraising-Backlog, Patch-For-Review, Unplanned-Sprint-Work: Are we losing transactions with repeated ct_id? - https://phabricator.wikimedia.org/T171349#3472037 (DStrine)
[19:46:46] Fundraising Sprint Navel Warfare, Fundraising-Backlog, Patch-For-Review, Unplanned-Sprint-Work: Are we losing transactions with repeated ct_id? - https://phabricator.wikimedia.org/T171349#3472049 (DStrine)
[20:13:46] (PS1) Ejegg: Get rid of return-a-reference option [wikimedia/fundraising/SmashPig] (deployment) - https://gerrit.wikimedia.org/r/367747 (https://phabricator.wikimedia.org/T171560)
[20:14:31] thanks for the CR mepps, I'll deploy that fix
[20:15:39] (PS1) Ejegg: Get rid of return-a-reference option [wikimedia/fundraising/SmashPig] - https://gerrit.wikimedia.org/r/367748 (https://phabricator.wikimedia.org/T171560)
[20:16:54] eileen1: oh, one thought - maybe we should run those omnimail jobs at different minutes-past-the-hour
[20:17:37] ejegg: yeah possibly - I put to every 30 mins I think?
[20:18:24] ok, so let's do the mailing load at :00 and :30
[20:18:32] and the recipient load at :10 and :40
[20:18:33] ?
[20:20:00] pushed an update if you want to review before deploy
[20:20:03] sounds good - although probably they will wind up being more often
[20:20:44] I was trying to keep it down at first
[20:21:11] good call
[20:21:13] ejegg: is upgrading drush on live ops or is it in a repo?
[20:21:24] eileen1: I think it's ops
[20:21:45] cwd - does that sound right?
[20:22:07] yeah, if it's in /usr/local/bin that's not something we can deploy
[20:22:25] ok cool will try to assign https://phabricator.wikimedia.org/T171435
[20:22:42] let me check
[20:23:01] Fundraising Sprint Gondwanaland Reunification Engine, Fundraising Sprint Homebrew Hadron Collider, Fundraising Sprint Ivory Tower Defense Games, Fundraising Sprint Judgement Suspenders, and 8 others: Drush not handling spaces in quotes / schedule Si... - https://phabricator.wikimedia.org/T171435#3472219
[20:24:48] eileen1: drush is a debian package
[20:25:01] so yeah it would be an ops update
[20:25:08] but if it's not updated upstream we'd have to build our own
[20:25:41] oh but hold on
[20:26:19] there's two of them
[20:26:42] oh so when I just type drush I might be getting a diff one than the cron
[20:27:14] ah i take it back
[20:27:28] the thing in /usr/local/bin is a sudo wrapper
[20:27:43] pretty sure everything uses the debian package
[20:28:36] ok it seems to be quite an old version?
[20:29:07] eileen1: I'd been meaning to scratch that 'by reference' itch for a while: https://gerrit.wikimedia.org/r/367747
[20:29:19] 5.10.0-2
[20:30:22] ejegg: that was nasty
[20:30:46] yeah, still shaking out some of the weird cruft from the initial design
[20:30:54] (and replacing it with different weird cruft!)
[20:32:06] eileen1: i don't see drush in backports either
[20:32:21] cwd - hmm latest is 8.1 https://github.com/drush-ops/drush
[20:33:02] yeah debian stable is not known for being cutting edge
[20:33:07] :-)
[20:33:14] drush can install via composer
[20:33:29] eileen1: drush8 is for drupal8, right?
[20:33:45] oh hey, nope, it does drupal7 too
[20:34:28] yep
[20:34:37] I think we have 8 in our build kits
[20:34:46] cwd how do you feel about this: https://github.com/webflo/drush-shim
[20:34:56] got my first failmail :-)
[20:35:24] (that's why it's only every 30 mins for now!)
[20:36:20] oops, double drush
[20:36:39] (jumprope style for drupal coders)
[20:37:22] eileen1: I'm still logged in to the deploy box - want me to fix those?
[20:37:42] ejegg: so that thing tries to use the drush in whatever path you are in, instead of the system one?
[20:37:59] cwd oh, is that what it does?
[20:38:00] ejegg: yes please
[20:38:45] cwd the probs I had running the command was from command line - I didn't try the spaces via cron
[20:39:07] eileen1: should be the same thing
[20:39:22] yeah - if there is only one drush version
[20:39:44] I don't know for sure it's a drush issue - but I don't have another idea
[20:39:49] of what it could be
[20:39:59] !log updated fundraising process-control to adb332586850d6d69fb83a823f6c5c0c53f229a6
[20:40:09] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[20:40:59] eileen1: drush seems likely, unless it's punting option handling to the sub-program
[20:41:15] eileen1: looks like the recipient load is running!
[20:42:20] I'm trying to run the mailing load and get those records in ahead of the recipients
[20:42:42] ok, looks like that came back with a bunch of things!
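On the drush version question above: rather than backporting a debian package, a current drush 8 (which, as noted, still supports Drupal 7) can be installed per-user via composer. A minimal sketch; ~/.composer/vendor/bin was composer's default global bin dir at the time and may differ per setup:

    composer global require "drush/drush:^8"
    export PATH="$HOME/.composer/vendor/bin:$PATH"
    drush --version   # should now report 8.x instead of the packaged 5.10.0-2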
[20:43:10] and a bunch of Undefined index: number_unsuppressed Load.php:96
[20:43:24] but that was just a warning
[20:45:03] ejegg: it doesn't matter from those jobs' pov - they don't match their data at that stage
[20:45:10] ah, got it
[20:45:39] I wonder about those notices - hadn't seen them earlier
[20:46:05] also recipient load is processing records back in Sep last year
[20:46:15] I think it is grabbing a week at a time
[20:46:30] oh yeah, no start date on that job
[20:47:38] (PS1) Ejegg: Merge branch 'master' into deployment [wikimedia/fundraising/SmashPig] (deployment) - https://gerrit.wikimedia.org/r/367797
[20:47:42] (CR) Ejegg: [C: 2] Merge branch 'master' into deployment [wikimedia/fundraising/SmashPig] (deployment) - https://gerrit.wikimedia.org/r/367797 (owner: Ejegg)
[20:47:57] ok - gonna do morning things - back soon
[20:49:09] (Merged) jenkins-bot: Merge branch 'master' into deployment [wikimedia/fundraising/SmashPig] (deployment) - https://gerrit.wikimedia.org/r/367797 (owner: Ejegg)
[20:53:49] Fundraising-Backlog, Wikimedia-Fundraising-CiviCRM: Populate country column when creating c_t rows during offline import - https://phabricator.wikimedia.org/T171658#3472423 (Ejegg)
[21:12:53] !log updated SmashPig from 523d6dddfb1d703213cdca12bc84b62299d8aebd to f4ca53ca303a7c9cbc4f0fd15d62116f0689b81d
[21:13:02] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[21:14:16] Fundraising Sprint Navel Warfare, Fundraising-Backlog, Patch-For-Review, Unplanned-Sprint-Work: Are we losing transactions with repeated ct_id? - https://phabricator.wikimedia.org/T171349#3461897 (Ejegg) a:mepps
[21:17:14] Fundraising Sprint Navel Warfare, Fundraising-Backlog, Patch-For-Review, Unplanned-Sprint-Work: Are we losing transactions with repeated ct_id? - https://phabricator.wikimedia.org/T171349#3472566 (Ejegg) @MBeat33 Thanks for bringing this to our attention! This is also related to the E-checks th...
[21:22:58] (PS1) Ejegg: Add country to c_t rows created during imports [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/367806 (https://phabricator.wikimedia.org/T171658)
[21:26:36] (CR) jerkins-bot: [V: -1] Add country to c_t rows created during imports [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/367806 (https://phabricator.wikimedia.org/T171658) (owner: Ejegg)
[21:27:05] Fundraising Sprint Navel Warfare, Fundraising-Backlog, Patch-For-Review, Unplanned-Sprint-Work: Are we losing transactions with repeated ct_id? - https://phabricator.wikimedia.org/T171349#3472640 (MBeat33) Thank you @Ejegg and @mepps !!
[21:53:56] Fundraising Sprint Gondwanaland Reunification Engine, Fundraising Sprint Homebrew Hadron Collider, Fundraising Sprint Ivory Tower Defense Games, Fundraising Sprint Judgement Suspenders, and 8 others: retrieve the text/ html and statistics data for m... - https://phabricator.wikimedia.org/T161758#3472755
[21:56:19] Fundraising Sprint Gondwanaland Reunification Engine, Fundraising Sprint Homebrew Hadron Collider, Fundraising Sprint Ivory Tower Defense Games, Fundraising Sprint Judgement Suspenders, and 8 others: retrieve the text/ html and statistics data for m... - https://phabricator.wikimedia.org/T161758#3472761
[21:57:36] (PS1) Ejegg: Fix failing refund test [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/367813
[22:06:01] Fundraising Sprint Loose Lego Carpeting, Fundraising Sprint Murphy's Lawyer, Fundraising Sprint Navel Warfare, Fundraising-Backlog, and 2 others: Add ability for MG to import to Primary address type - https://phabricator.wikimedia.org/T169025#3472825 (LeanneS) @Eileenmcnaughton thanks, the fixes...
[22:10:15] PROBLEM - check_mysql on frdb2001 is CRITICAL: SLOW_SLAVE CRITICAL: Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 1749
[22:11:03] Fundraising-Backlog, Wikimedia-Fundraising-CiviCRM, Continuous-Integration-Config: Find way to exclude php 5.4 files from vendor lint task - https://phabricator.wikimedia.org/T170641#3472852 (greg)
[22:15:15] RECOVERY - check_mysql on frdb2001 is OK: Uptime: 1236170 Threads: 1 Questions: 32790922 Slow queries: 6781 Opens: 9067 Flush tables: 1 Open tables: 608 Queries per second avg: 26.526 Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 0
[22:15:26] (PS2) Ejegg: Add country to c_t rows created during imports [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/367806 (https://phabricator.wikimedia.org/T171658)
[22:18:58] eileen1: something having to do with location_type_id is messing up my crm phpunit tests locally, and I think maggie's too
[22:19:17] If I restore a blank db and run tests, they work, but on the second run I get like 50 failures
[22:19:53] let's see, looks like mostly the merge tests
[22:19:54] (CR) jerkins-bot: [V: -1] Add country to c_t rows created during imports [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/367806 (https://phabricator.wikimedia.org/T171658) (owner: Ejegg)
[22:24:08] (Abandoned) Ejegg: Get rid of return-a-reference option [wikimedia/fundraising/SmashPig] (deployment) - https://gerrit.wikimedia.org/r/367747 (https://phabricator.wikimedia.org/T171560) (owner: Ejegg)
[22:24:16] (CR) Ejegg: "recheck" [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/367806 (https://phabricator.wikimedia.org/T171658) (owner: Ejegg)
[22:25:06] ejegg: hmm - if I run the tests once & then rerun I get errors because a tag has been deleted - some cleanup is too aggressive or a later test overwrites an earlier one
[22:25:30] eileen1: but none of the location_type_id issues?
[22:26:09] the crashes all seem to come in CRM/Dedupe/Merger.php(786): CRM_Dedupe_Merger::getRowsElementsAndInfo('595', '902', NULL)
[22:26:11] well - not 100% sure
[22:26:16] I can take a look
[22:26:23] probably a test I wrote causing it!
[22:26:24] thanks!
[22:26:36] do you want to stick in a phab just to track
[22:26:50] ok, sure
[22:27:54] (CR) jerkins-bot: [V: -1] Add country to c_t rows created during imports [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/367806 (https://phabricator.wikimedia.org/T171658) (owner: Ejegg)
[22:29:18] Fundraising-Backlog, Wikimedia-Fundraising-CiviCRM: CiviCRM phpunit tests failing on second run - https://phabricator.wikimedia.org/T171680#3472919 (Ejegg)
[22:30:40] (CR) Ejegg: "Weird, 10-second timeouts in random places. VM freezeups?" [wikimedia/fundraising/crm] - https://gerrit.wikimedia.org/r/367806 (https://phabricator.wikimedia.org/T171658) (owner: Ejegg)
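A minimal sketch of the restore-then-test cycle described above, the quickest way to reproduce the second-run failures (the database name and dump path are hypothetical):

    # reload a known-clean database so leftover state (e.g. the deleted tag,
    # already-merged contacts) can't bleed into the next run
    mysql -u root civicrm_tests < ~/dumps/civicrm_clean.sql
    cd /var/www/fr-tech/crm
    vendor/bin/phpunit --group WmfCivicrm   # passes on a fresh db; rerun without restoring to reproduce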