[00:24:00] hey, will Special:PasswordReset work on Wikitech? [00:24:05] wit LDAP backend [00:24:16] or is a script involved [01:33:38] PROBLEM - Free space - all mounts on tools-webproxy is CRITICAL: CRITICAL: tools.tools-webproxy.diskspace._var.byte_percentfree.value (<11.11%) [01:43:36] RECOVERY - Free space - all mounts on tools-webproxy is OK: OK: All targets OK [01:59:40] PROBLEM - Free space - all mounts on tools-webproxy is CRITICAL: CRITICAL: tools.tools-webproxy.diskspace._var.byte_percentfree.value (<11.11%) [02:29:38] RECOVERY - Free space - all mounts on tools-webproxy is OK: OK: All targets OK [02:40:36] PROBLEM - Free space - all mounts on tools-webproxy is CRITICAL: CRITICAL: tools.tools-webproxy.diskspace._var.byte_percentfree.value (<55.56%) [03:05:36] RECOVERY - Free space - all mounts on tools-webproxy is OK: OK: All targets OK [04:21:38] PROBLEM - Free space - all mounts on tools-webproxy is CRITICAL: CRITICAL: tools.tools-webproxy.diskspace._var.byte_percentfree.value (<11.11%) [04:46:41] RECOVERY - Free space - all mounts on tools-webproxy is OK: OK: All targets OK [04:57:34] PROBLEM - Puppet failure on tools-exec-07 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [05:22:33] RECOVERY - Puppet failure on tools-exec-07 is OK: OK: Less than 1.00% above the threshold [0.0] [06:37:02] PROBLEM - Puppet failure on tools-webgrid-tomcat is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [06:42:53] PROBLEM - Puppet failure on tools-submit is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [07:02:02] RECOVERY - Puppet failure on tools-webgrid-tomcat is OK: OK: Less than 1.00% above the threshold [0.0] [07:06:01] 3Gerrit-Patch-Uploader: Gerrit Patch Uploader does not work because no space left of device - https://phabricator.wikimedia.org/T88517#1017184 (10Fomafix) Gerrit Patch Uploader still does not work. 
[07:12:53] RECOVERY - Puppet failure on tools-submit is OK: OK: Less than 1.00% above the threshold [0.0] [07:18:55] YuviPanda: if /tmp is not for tmp, then what is it for? [07:19:19] Cloning there is much faster [07:19:31] Our partitioning setup was terrible until like 2 days agi [07:19:33] Ago [07:20:01] legoktm: no, not currently, but should be easy-ish to add [07:21:13] So? There should be free space on /tmp and whoever is filling it should fix their stuff [07:21:37] Everything breaks when tmp is full [07:22:08] valhallasw`cloud: could you? :) [07:23:46] valhallasw`cloud: do you clean up /tmp after yourself? [07:23:49] Some tools don't [07:25:34] Yes, I do :-P [07:26:21] And i need /tmp for its io perf [07:27:27] Legoktm will take a look later [07:27:34] ty :D [07:27:44] Now on phone and still sleepy [07:27:54] * legoktm tucks valhallasw`cloud in [07:28:20] Noooooooo I need to get out soo. [07:28:41] ..ooooo warm and nice *ZZzzz* [07:49:07] valhallasw`cloud: I can clear out /tmp in a while [07:49:20] Am out eating food [07:49:48] I'll check if I'm the bad guy soonTM [08:38:20] YuviPanda: at least 3.3G was mine :< [08:39:11] 3Gerrit-Patch-Uploader: Gerrit Patch Uploader does not work because no space left of device - https://phabricator.wikimedia.org/T88517#1017343 (10valhallasw) Cleaned up 3.3G in temp files; should be OK now. Still need to figure out why the temp files were not removed, though... 
[08:39:30] valhallasw`cloud: hahaa [08:39:40] 3Gerrit-Patch-Uploader: Gerrit Patch Uploader does not reliably clean up after itself in /tmp - https://phabricator.wikimedia.org/T88517#1017344 (10valhallasw) p:5Unbreak!>3High [08:41:16] 3Gerrit-Patch-Uploader: Gerrit Patch Uploader does not reliably clean up after itself in /tmp - https://phabricator.wikimedia.org/T88517#1013871 (10valhallasw) Broken in https://github.com/valhallasw/gerrit-patch-uploader/commit/1b95efd3d009f4d70b29e87e3004b3370051b3bf#diff-e85f26108a7329f89aa2c7fb9d0f0476L272 [08:50:15] RECOVERY - Free space - all mounts on tools-webgrid-02 is OK: OK: All targets OK [08:54:36] legoktm: add only_new_files to the template and it shoudl work [08:54:42] famous last words -_-' [08:54:54] mm [08:54:58] let me make that match_new_files [08:55:16] "only_match_new_files" [08:55:40] (03PS2) 10Merlijn van Deen: hacky script to dump mysql subscriptions to redis [labs/tools/gerrit-to-redis] - 10https://gerrit.wikimedia.org/r/185644 [08:59:30] [13gerrit-patch-uploader] 15valhallasw pushed 1 new commit to 06master: 02http://git.io/b0Li [08:59:31] 13gerrit-patch-uploader/06master 14609d65c 15Merlijn van Deen: Make sure tempdir is removed [09:03:01] 3Gerrit-Patch-Uploader: Gerrit Patch Uploader does not reliably clean up after itself in /tmp - https://phabricator.wikimedia.org/T88517#1017378 (10valhallasw) 5Open>3Resolved a:3valhallasw Should be OK now. [09:39:34] valhallasw`cloud: :P [09:39:44] YuviPanda: what? :P [09:39:51] :P [09:40:00] the code was there, I just commented it at some point while debugging [09:40:13] and then you pep8'ed the commented code away! 
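The tempdir bug resolved above comes down to removing the scratch directory on every code path, success or failure. A minimal, hypothetical sketch of that pattern (not the actual Gerrit Patch Uploader code, which lives in the repository linked above):

```python
import os
import tempfile

def process_patch(patch_text):
    # tempfile.TemporaryDirectory removes the scratch dir when the
    # `with` block exits, even if patch processing raises.
    with tempfile.TemporaryDirectory(prefix="uploader-") as workdir:
        patch_path = os.path.join(workdir, "patch.diff")
        with open(patch_path, "w") as f:
            f.write(patch_text)
        # ... clone, `git apply`, push to Gerrit, etc. would go here ...
        done = os.path.exists(patch_path)
    # By this point workdir is gone again, so /tmp cannot fill up.
    return done, workdir

ok, workdir = process_patch("--- a/README\n+++ b/README\n")
print(ok, os.path.exists(workdir))
```

The same guarantee can be had with try/finally, but the context manager is harder to accidentally comment out during debugging.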
[09:41:02] hahahahaha [09:41:02] :D [10:18:14] 3Labs: Monitor that nfs-manage-volumes deamon is running on the NFS hosts - https://phabricator.wikimedia.org/T88664#1017483 (10yuvipanda) 3NEW a:3coren [11:11:48] 3Labs: Fix puppet code to make sure that manage-nfs-volumes deamon is running - https://phabricator.wikimedia.org/T88669#1017556 (10yuvipanda) 3NEW a:3coren [11:12:37] hi folks [11:12:48] did anyone notice any problem with enwiki database? [11:12:56] it's going terribly slowly to me this morning [11:12:56] hey marcmiquel [11:13:02] what queries are you executing? [11:13:07] a very simple one [11:13:12] looking for links to Pizza article [11:13:12] can you pastebin it? [11:13:23] http://quarry.wmflabs.org/query/1865 [11:14:11] marcmiquel: I think that one isn't hitting indexes. [11:14:16] try [11:14:28] select count(*) from enwiki_p.pagelinks where pl_namespace = 0 and pl_title="Pizza" and pl_from_namespace=0; [11:14:46] that returned in 0.01s [11:15:05] while leaving out the namespace makes it run forrreeevvvver [11:15:13] marcmiquel: this is because of the way mysql indexes work [11:15:39] i don't get it. i thought that only the origin was necessary [11:15:52] YuviPanda: well, mainly because the index is (pl_namespace, pl_title) and not the other way around [11:16:03] which is weird, because the other way around would be much more reasonable [11:16:04] right [11:16:33] but no-one thought much about that back in the days, I suppose [11:17:11] so...the reason why it didn't work is because of efficiency [11:17:25] i tried the same query before in catalan/italian/spanish and it did work [11:17:27] although maybe 'give me all links from X to pages in NS0' is a sensible question [11:17:45] marcmiquel: it's not that it doesn't /work/, it's that it's /inefficient/ [11:18:00] yeh, valhallasw`cloud. but it broke [11:18:05] it broke because it's inefficient [11:18:09] i guess [11:18:22] marcmiquel: well, it was killed because it took too long, yeah. 
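The slow Quarry query above is a textbook leftmost-prefix problem: the composite index on (pl_namespace, pl_title) is only usable when the leading column pl_namespace is constrained. SQLite (stdlib sqlite3) stands in for MySQL in this sketch; the prefix rule is the same, though the plan output format differs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pagelinks (pl_namespace INTEGER, pl_title TEXT)")
conn.execute("CREATE INDEX pl_ns_title ON pagelinks (pl_namespace, pl_title)")

def plan(where):
    # Return the query plan as one string: "SEARCH ... USING ... INDEX"
    # means an index seek, "SCAN" means every row is visited.
    rows = conn.execute(
        "EXPLAIN QUERY PLAN SELECT count(*) FROM pagelinks WHERE " + where
    ).fetchall()
    return " ".join(row[-1] for row in rows)

# Leading column constrained: the composite index can be seeked.
print(plan("pl_namespace = 0 AND pl_title = 'Pizza'"))
# Only the second column constrained: no seek is possible.
print(plan("pl_title = 'Pizza'"))
```

This is why adding `pl_namespace = 0` made the query return in 0.01s while the title-only version ran long enough to be killed.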
[11:26:16] Hey guys, I have a simple question about API read requests. Is this the right place to ask it? [11:30:00] livnetata: might as well, but you are probably going to get more responses in #wikimedia-dev [11:31:15] YuviPanda: I'll try here and see how it goes. [11:32:22] Is sending about 200,000 read requests to the API possible? or is it actually a terrible idea (which is my guess)? [11:32:46] ah [11:32:51] it sounds like a terrible idea at first :) [11:32:56] what are you going to be doing that for? [11:33:03] it should take around an hour I think [11:33:19] have you looked at using the dumps? [11:33:59] I need to work with revision text. I know exactly what I need and the other option is downloading the en revision dumps which sounds like an even worse idea. [11:34:25] livnetata: if you are on toollabs, the dumps are available readymade on /public/dumps [11:35:05] As I understand, the text is not there... [11:35:19] livnetata: it is there in the dumps. [11:35:27] labs also has databases available [11:35:29] those don't have the text [11:35:30] but the dumps do [11:36:45] Ok, but then you have to download more than a Terabyte of data. I can't pinpoint what I need, right? [11:37:42] livnetata: if you are running your tool *on* toollabs (tools.wmflabs.org, which provides shell / other tools for volunteers), you don't need to download anything [11:37:48] you just run your code there, and read from /public/dumps [11:37:52] which has the dump XML files [11:37:57] you will have to process them all, yeah [11:38:02] but there are utilities that do that for you already. [11:38:17] like https://github.com/halfak/mediawiki-utilities [11:38:24] Yeah, I know it. [11:38:26] Thanks [11:38:26] livnetata: also, if you use the API, we ask that you do not use multithreaded hits. [11:38:33] So you think that's the better option?
[11:38:38] so then you have to query one at a time, and I highly doubt it would complete in an hour [11:38:42] yes, I strongly thing using the dumps [11:38:44] is a better idea [11:40:23] ok. YuviPanda, Thanks for your help! [11:40:27] yw! [11:41:56] !log deployment-prep disabled puppet on mediawiki01, 02, jobrunner01, bastion and salt [11:42:02] Logged the message, Master [11:42:07] _joe_: ^ odne [11:42:08] *done [11:42:32] <_joe_> so you have to define two variables in hiera [11:42:38] <_joe_> mediawiki::users::web [11:42:59] <_joe_> and beta::syncsiteresources::user [11:43:11] yeah, doing now. [11:43:15] to www-data, I guess [11:43:20] <_joe_> yep [11:44:00] done [11:44:51] !log deployment-prep deleted mediawiki03 instance, holdover from security testing from long, long ago [11:44:55] Logged the message, Master [11:45:25] <_joe_> mmmm [11:45:40] <_joe_> I hope we remembered everything [11:46:06] _joe_: https://phabricator.wikimedia.org/T78076#835309 is what bd808 did when he did the earlier renumbering [11:46:09] so we seem to be doing 'em all [11:46:37] we technically could depool the servers, btw. varnish doesn't send requests to a backend if it is down [11:46:47] <_joe_> yea we will do that [11:47:42] <_joe_> we should exclude the nfs filesystems from our file search [11:47:58] _joe_: hmm, is changing *all* apache users to www-data a good idea? [11:48:05] I guess apache will have other things that should remain apache? [11:48:30] <_joe_> oh bryan did [11:48:36] <_joe_> YuviPanda: not really [11:49:21] oh [11:49:22] <_joe_> YuviPanda: https://phabricator.wikimedia.org/P262 [11:49:24] right [11:49:26] apache is ours [11:50:01] that seems good [11:50:09] <_joe_> ok I'll do mediawiki01, you pick something non-mediawiki :) [11:50:24] _joe_: heh, let me do bastion. [11:50:36] <_joe_> bastion? [11:51:23] that's the deployment host [11:51:26] tin equivalent [11:51:45] <_joe_> oh ok [11:51:51] <_joe_> did you disable puppet there? 
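For jobs like the 200,000-revision one discussed above, streaming the XML dumps beats hammering the API one request at a time. mediawiki-utilities (linked above) does this properly; the sketch below shows only the underlying idea, using the stdlib over a toy, namespace-free stand-in for the dump format (real dumps are bz2-compressed, carry an xmlns, and sit under /public/dumps on toollabs):

```python
import io
import xml.etree.ElementTree as ET

# Toy, namespace-free stand-in for a MediaWiki XML export fragment.
SAMPLE = """<mediawiki>
  <page>
    <title>Pizza</title>
    <revision><id>1</id><text>Flat bread with toppings.</text></revision>
    <revision><id>2</id><text>Flatbread with toppings.</text></revision>
  </page>
</mediawiki>"""

def iter_revisions(fileobj):
    # iterparse streams the input, so memory use stays flat even for
    # multi-terabyte dumps; clear() releases each finished <page>.
    for _, elem in ET.iterparse(fileobj, events=("end",)):
        if elem.tag == "page":
            title = elem.findtext("title")
            for rev in elem.iter("revision"):
                yield title, rev.findtext("id"), rev.findtext("text")
            elem.clear()

for title, rev_id, text in iter_revisions(io.StringIO(SAMPLE)):
    print(title, rev_id, len(text))
```

Against a real dump the fileobj would come from `bz2.open(...)`, and the tag names would need the dump's XML namespace prefix.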
[11:52:25] yeah [11:52:27] <_joe_> !log deployment-prep converting the web user to www-data [11:52:31] Logged the message, Master [11:52:34] <_joe_> and do the hiera change [11:53:21] !log deployment-prep running git-sync-upstream on deployment-salt to pick up latest ops/puppet changes [11:53:24] Logged the message, Master [11:53:35] _joe_: rebase conflict :D moment [11:53:54] from the local hack patch [11:54:03] <_joe_> uh my script is wrong [11:54:06] ok, fixed now [11:54:23] <_joe_> don't start apache and hhvm before having re-run puppet [11:55:01] <_joe_> mmmh [11:55:30] <_joe_> wait :) [11:55:35] _joe_: y'know, I think two of us doing this is dangerous. I'm going to stop now, and just let you do things. we have time. [11:58:05] * YuviPanda waits [11:58:23] <_joe_> ok now instructions should be good [11:59:04] I'm still going to wait until you've successfully migrated mediawiki01 :) [11:59:31] <_joe_> ok [11:59:40] <_joe_> did you prepare puppet+ hiera? [12:00:16] <_joe_> yes I see [12:00:55] _joe_: yeah, I made the hiera patch on wikitech and also disabled puppet [12:01:11] > deployment-prep disabled puppet on mediawiki01, 02, jobrunner01, bastion and salt [12:01:15] <_joe_> and merged production in the puppetmaster? [12:01:21] _joe_: yup. [12:01:39] <_joe_> otice: /Stage[main]/Hhvm/File[/run/hhvm/cache/cli.hhbc.sq3]/owner: owner changed 'www-data' to 'apache' [12:01:42] <_joe_> mmmh [12:01:47] <_joe_> how is that possible? [12:01:53] hmm, deployment-salt's puppetmaster is at [12:01:53] beta: allow defining the web user. [12:02:08] _joe_: hmm, I've a suspicion [12:02:15] _joe_: which is that mwyaml backend doesn't work anymore [12:02:21] let me test that suspicion [12:02:22] <_joe_> why? [12:02:31] <_joe_> why shouldn't it work? [12:02:51] _joe_: try puppet again on mediawiki01? 
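For reference, the two hiera keys _joe_ names above would end up looking roughly like this in deployment-prep's common.yaml on deployment-salt (a sketch; the exact file layout may differ):

```yaml
# Switch the beta cluster web user from "apache" to "www-data".
mediawiki::users::web: www-data
beta::syncsiteresources::user: www-data
```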
[12:03:16] _joe_: not sure, but a few days ago had other strange things happen on toollabs that could potentially be explained away by mwyaml not working, but was too late and did not debug. [12:03:38] <_joe_> ok so it may not work [12:03:59] <_joe_> maybe our well thought-out change to the labs hiera file wasn't so well thought [12:04:00] _joe_: I just added the appropriate hiera things to the deployment-prep/common.yaml file [12:04:02] on deployment-salt [12:04:09] <_joe_> ok good [12:04:14] _joe_: so if mediawiki01 works fine now, then we know mwyaml is the culprit [12:06:18] <_joe_> YuviPanda: lgtm [12:06:22] <_joe_> yes it is indeed [12:06:27] <_joe_> we'll fix it [12:06:36] <_joe_> we broke it, then we fix it [12:07:05] yup [12:07:08] <_joe_> ok, doing mediawiki02 [12:07:18] cool. let me commit it so it isn't a local hack [12:07:35] <_joe_> leave the hack for 5 minutes man [12:07:37] <_joe_> relax :) [12:08:06] <_joe_> actually, you could start to change the owner of images, or do the other hosts [12:08:24] <_joe_> On the jobrunner, you need to do service jobrunner stop as well [12:09:07] _joe_: the hack will be killed in about 20mins, becuase I 'fixed' the auto-rebaser script to throw away uncommited local changes :P [12:10:01] <_joe_> meh [12:10:08] !log deployment-prep stopped jobrunner on jobrunner01 [12:10:11] <_joe_> now you know why :) [12:10:12] Logged the message, Master [12:12:04] _joe_: commited and fixed :) [12:12:09] _joe_: I'm doing a chown of the images now. [12:12:16] <_joe_> ok [12:12:37] <_joe_> of course remove the -prune after -fstype [12:12:45] <_joe_> and the -o and parenthese [12:13:06] _joe_: I'm just doing [12:13:10] sudo chown -R www-data:www-data upload7/ [12:13:16] <_joe_> uh [12:13:21] <_joe_> that brutal? 
:P [12:13:23] <_joe_> ok [12:13:32] <_joe_> should work [12:13:48] !log deployment-prep running time sudo chown -R www-data:www-data upload7/ on /data/project [12:13:52] Logged the message, Master [12:14:30] <_joe_> ok the two mediawikis are done [12:14:35] wheeee [12:14:39] <_joe_> what's next for me? [12:14:50] <_joe_> trust me, there will be something that breaks [12:15:03] _joe_: :) deployment-bastion? [12:15:06] that's the tin equivalent [12:15:06] <_joe_> some obscure script that has "apache" hardcoded [12:15:12] <_joe_> ok I'll take it [12:15:26] <_joe_> we should check the mediawiki repo as well, in fact [12:15:47] _joe_: yedah [12:15:48] *yeah [12:15:54] _joe_: I'm doing jobrunner now [12:15:58] <_joe_> ok [12:16:03] <_joe_> I'm doing bastion [12:16:08] cool [12:17:13] <_joe_> oh remember to send an email to releng people, apart from commenting on phab [12:21:36] <_joe_> YuviPanda: which host is the maintenance host? [12:21:41] <_joe_> the one running crons [12:21:56] _joe_: deployment-bastion, I think? [12:22:05] <_joe_> mmmh k [12:22:10] I don't know if we're actually running crons, but if we are, they're there [12:22:11] <_joe_> I don't think so, no [12:23:10] <_joe_> # HEADER: This file was autogenerated at Tue Jun 03 21:58:47 +0000 2014 by puppet. [12:23:13] <_joe_> sigh [12:23:41] cool, jobrunner done [12:24:38] <_joe_> YuviPanda: check /var/spool/cron/crontabs/ [12:24:45] of jobrunner? 
[12:24:46] <_joe_> just to be sure [12:24:54] <_joe_> yeah on any host you're converting [12:25:19] 0 19 * * * /usr/bin/find /tmp -name "perf-*" -not -cnewer /run/hhvm/hhvm.pid -delete > /dev/null 2>&1 [12:25:24] that seems ok [12:26:01] _joe_: I wonder if I should've done the chown of upload7 on NFS directly on the NFS host [12:26:05] would've been faster, perhaps [12:26:07] it's still running [12:26:14] <_joe_> YuviPanda: I assumed you did :) [12:26:20] I'm an idiot, of course :) [12:26:22] <_joe_> it's 1000x faster [12:26:33] * YuviPanda does [12:26:35] yeah [12:26:35] <_joe_> you're just a bit a newbie [12:26:40] <_joe_> now you know :) [12:26:45] yeah [12:26:59] the -cnewer with pid is also a really nice trick I wasn't aware of before [12:27:04] <_joe_> NFS is particularly bad [12:27:17] <_joe_> at stat() calls [12:27:29] <_joe_> well, it's particularily bad in general. [12:27:51] <_joe_> but! whenever you need a shared filesystem, youll find they are all generally worse [12:27:59] leaky abstraction [12:28:00] I guess [12:28:11] !log deployment-prep killed chown on deployment-bastion, running direclty on NFS server [12:28:16] Logged the message, Master [12:28:32] <_joe_> deployment-mediawiki02.eqiad.wmflabs is not letting me in, wtf? [12:29:14] <_joe_> are you able to log in? [12:29:22] <_joe_> I guess some ldap hiccup? [12:29:24] _joe_: hmm [12:29:37] Feb 5 12:28:41 deployment-mediawiki02 sshd[27477]: error: Unsafe AuthorizedKeysCommand: bad ownership or modes for directory / [12:29:40] Feb 5 12:28:41 deployment-mediawiki02 sshd[27477]: Authentication refused: bad ownership or modes for directory / [12:29:48] Feb 5 12:28:41 deployment-mediawiki02 sshd[27477]: Failed publickey for oblivian [12:29:52] <_joe_> uhm did I fuck something up? 
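The cron line quoted above uses `-not -cnewer /run/hhvm/hhvm.pid` to delete only perf files left over from before the current HHVM started. The same logic as a Python sketch (hypothetical paths; mtime is compared instead of find's ctime so the demo stays deterministic):

```python
import os
import tempfile
import time

def clean_stale(tmpdir, prefix, ref_file):
    """Delete prefix-matching files in tmpdir that are not newer than
    ref_file, i.e. leftovers from before the current process started."""
    ref_mtime = os.stat(ref_file).st_mtime
    removed = []
    for entry in os.scandir(tmpdir):
        if entry.name.startswith(prefix) and entry.stat().st_mtime <= ref_mtime:
            os.unlink(entry.path)
            removed.append(entry.name)
    return removed

# Demo against a scratch dir: a fake pidfile, one stale and one fresh file.
demo = tempfile.mkdtemp()
pidfile = os.path.join(demo, "hhvm.pid")
open(pidfile, "w").close()
stale = os.path.join(demo, "perf-old")
open(stale, "w").close()
os.utime(stale, (1, 1))                      # long before the pidfile
fresh = os.path.join(demo, "perf-new")
open(fresh, "w").close()
os.utime(fresh, (time.time() + 3600,) * 2)   # after the pidfile
removed = clean_stale(demo, "perf-", pidfile)
print(removed)  # → ['perf-old']
```

Anchoring the cutoff to the pidfile's timestamp is what makes this safe to run while HHVM is up: anything the running process wrote survives.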
[12:29:55] _joe_: I already have a connection open there [12:30:00] <_joe_> look at ownership of / [12:30:24] <_joe_> and set it to root:root in case [12:30:53] _joe_: yeah, set it to root:root, works now [12:31:10] <_joe_> mh what did I do? meh [12:31:14] things like this make me distrust my ability to do find [12:31:19] mediawiki01 seems fine [12:31:43] <_joe_> yeah, no, see lastcomm [12:31:48] _joe_: uh, I didn't actually look at ownership, just reset it :|_ [12:32:00] I also am unsure how to look at ownership of / itslef? [12:32:14] sorry [12:32:21] now we don't know what it was owned as [12:32:50] <_joe_> ls -lad / [12:32:59] <_joe_> YuviPanda: I know, sadly [12:33:23] hmm? [12:33:32] you know that we don't know what / was owned as, or you know what went wrong? [12:33:38] <_joe_> find / -fstype nfs -prune -o \(-user apache -exec chown www-data {} \+ \) vs find / -fstype nfs -prune -o \( -user apache -exec chown www-data {} \+ \) [12:33:58] <_joe_> I corrected it after one second [12:34:02] <_joe_> but well, whatever [12:34:05] aaah, I see. [12:34:06] right. [12:34:35] if someone as experienced as you sometimes mess up find + -exec, I should be *really* careful :) [12:34:48] <_joe_> well find is my enemy [12:34:55] <_joe_> and my swiss knife [12:34:58] :D [12:35:14] <_joe_> but, lemme fix things in prod now [12:35:26] _joe_: hmm, so beta is 'done'? 
[12:36:07] <_joe_> YuviPanda: yes [12:36:14] \o/ [12:36:15] well [12:36:20] except for /data/project/upload7 [12:36:22] which is still ongoing [12:36:24] <_joe_> yes [12:36:29] ah [12:36:29] done [12:36:31] 7mins [12:36:32] not bad [12:36:58] whee, thumbnailing at least works http://commons.wikimedia.beta.wmflabs.org/wiki/File:Het_aanreiken_van_een_brief_in_een_voorhuis-SA_7515.jpg [12:37:40] _joe_: I'll wait for the releng team to wake up to see if we've broken anything [12:37:59] <_joe_> well Reedy and hashar are around I guess [12:38:09] gooood morning or whatever [12:38:22] <_joe_> in yuvi's case, evening :) [12:38:23] YuviPanda: I am more or less there. Taking care of sick kids today [12:38:32] :D [12:38:32] <_joe_> for us it's almost lunchtime [12:38:33] but they are having a nap right now so I have some time [12:38:40] * hashar had lunch already [12:38:52] hashar: whee! we just futzed around and moved things a lot in beta, and nothing seems to have broken to my knowledge / limited testing [12:39:01] <_joe_> both sick? classic [12:39:13] YuviPanda: browser tests will complain I guess [12:39:17] <_joe_> "the main page renders" is our debug [12:39:27] _joe_: I checked with salt. videoscaler01 is still running apache user [12:39:27] YuviPanda: you might want to post on the qa list to announce whatever you did [12:39:31] but videoscaler01 has never worked... [12:39:35] <_joe_> YuviPanda: oh, right [12:39:37] hashar: yeah, we're keeping a phab ticket up to date, I'll post in a bit. [12:39:47] _joe_: but videoscaler never worked on beta, afaik. [12:40:04] qa list has devs from various devs team, so that is a good audience for beta cluster related work [12:40:30] hashar: oh wait, the qa list is different from the releng list? 
[12:40:36] yup [12:40:40] releng is mostly for the team [12:40:43] ah, heh [12:40:44] right [12:40:45] qa is for the team and others [12:40:47] yeah confusing [12:42:18] _joe_: I'm taking care of videoscaler anyway, btw [12:43:39] (03Abandoned) 10JanZerebecki: Add labs ssl key for unified.wikimedia.org [labs/private] - 10https://gerrit.wikimedia.org/r/134852 (owner: 10JanZerebecki) [12:45:27] <_joe_> WTF???? [12:45:47] aha, scap has -u apache hardcoded [12:45:49] lol [12:45:52] * YuviPanda fixes [12:48:44] !log deployment-prep cherry-picking https://gerrit.wikimedia.org/r/188798 on scap on deployment-prep [12:48:47] Logged the message, Master [13:08:50] 3Wikimedia-Labs-Other, operations: (Tracking) Database replication services - https://phabricator.wikimedia.org/T50930#1017835 (10Merl) [13:12:51] 3Wikimedia-Labs-Other, operations: (Tracking) Database replication services - https://phabricator.wikimedia.org/T50930#1017846 (10Krenair) [13:40:33] Is there a way to retrieve installed extensions for an wiki using the replica databases? [14:14:37] DrSkyLizard: maybe in the meta-database. Just asking the api is probably the best option. [14:21:04] DrSkyLizard: No, that data comes from the config files and not the DB. valhallasw`cloud has the right of it that the easiest way is to poke the API [14:32:52] 3operations, Labs: Rename specific account in LDAP, Wikitech, Gerrit and Phabricator - https://phabricator.wikimedia.org/T85913#1017943 (10yuvipanda) Sorry for the delayed response. I don't know if we have done this before (@andrew?), but it's trivial to do. However, because we have never done this before, I do... [14:34:46] YuviPanda: Hm. Do we have a mechanism for passive checks? I'd rather had a heartbeat of some sort to manage-nfs-volumes than just look if the process is there (because that's only part of the equation) [14:35:23] Coren: I suppose there might be? I'm not sure. 
[14:35:33] Coren: this would hit icinga though, and not shinken, so we have more options [14:35:48] Ah, true. That does simplify things. [14:36:11] I think right now our "check if X is up" checks are mostly of the ps|grep sort though. [14:36:30] right [14:36:34] that'd still be better than nothing [14:37:12] It would. [15:17:34] 3Labs: New labs instances not being allowed to mount anything from NFS - https://phabricator.wikimedia.org/T88527#1017986 (10coren) [15:17:35] 3Labs: Monitor that nfs-manage-volumes deamon is running on the NFS hosts - https://phabricator.wikimedia.org/T88664#1017984 (10coren) 5Open>3Resolved Icinga now keeps an eye on it: https://icinga.wikimedia.org/cgi-bin/icinga/extinfo.cgi?type=2&host=labstore1001&service=manage_nfs_volumes_running [15:18:23] 3Labs: New labs instances not being allowed to mount anything from NFS - https://phabricator.wikimedia.org/T88527#1014054 (10coren) [15:18:26] 3Labs: Fix puppet code to make sure that manage-nfs-volumes deamon is running - https://phabricator.wikimedia.org/T88669#1017987 (10coren) 5Open>3Resolved Now ensure => running [15:43:29] 3operations, Labs: Rename specific account in LDAP, Wikitech, Gerrit and Phabricator - https://phabricator.wikimedia.org/T85913#1018041 (10Chad) We have done it before, we have docs for it (see my previous comment). You need 3 separate accesses to make this change (meaning only a root can do all of it): * LDAP... [15:48:27] 3Labs: New labs instances not being allowed to mount anything from NFS - https://phabricator.wikimedia.org/T88527#1018064 (10coren) [15:48:30] 3Labs: Make sure that manage-nfs-volumes-deamon can not run as root - https://phabricator.wikimedia.org/T88579#1018062 (10coren) 5Open>3Resolved Rather than guard against root being used, I added a guard so that only nfsmanager can be. Safer this way. 
[15:49:33] hi guys [15:49:47] i've got a question about the database [15:50:00] Do you know why the number of revisions from revision table does not correspond to the number of revisions per language in the meta page "List of Wikipedias" being it lower? [15:50:53] marcmiquel: There are a number of reasons for this, the biggest one being suppressed edits. [15:51:05] marcmiquel: Also, not counting exactly the same way. [15:52:13] I see... [15:52:24] but many less in the revs table! [15:52:28] even counting discussions and user pages [15:53:13] 3Labs: Puppetize & fix tools-db - https://phabricator.wikimedia.org/T88234#1018067 (10coren) p:5Triage>3High [15:55:49] 3Wikimedia-Labs-Infrastructure, Labs: No init script for idmapd on labs jessie instances - https://phabricator.wikimedia.org/T87309#1018076 (10coren) [15:55:50] 3Labs: Labs NFSv4/idmapd mess - https://phabricator.wikimedia.org/T87870#1018077 (10coren) [15:56:41] 3Wikimedia-Labs-Infrastructure, Labs: Debian Jessie image for Labs - https://phabricator.wikimedia.org/T75592#1018084 (10faidon) [15:56:44] 3Wikimedia-Labs-Infrastructure, Labs: No init script for idmapd on labs jessie instances - https://phabricator.wikimedia.org/T87309#1018082 (10faidon) 5duplicate>3Open This isn't a duplicate, this is a separate issue. [15:57:38] 3Labs: virt1002/labstore1001 network exhaustion - https://phabricator.wikimedia.org/T84003#1018087 (10coren) 5Open>3Resolved This simply requires hunt-and-seek of outliers as they occur. [16:01:19] another question [16:01:36] I see there are bots in the user group bot but not all of them use this flag [16:01:49] others use 'Bot' inside their nickname and that's it [16:02:05] but some include bot in their name because it's their real name like Cabot (a surname) [16:02:52] anyone knows how to approach a mw to get the most trustworthy list of bots possible? [16:06:53] marcmiquel: There really isn't a way - the different projects all have different rules about how to flag bots (or not). 
[16:07:22] marcmiquel: enwiki, for instance, will not give the bot flag to some bots on purpose (so that their edits show in recentchanges) [16:07:40] there are more people using 'bot' in their nicknames than flags [16:07:53] marcmiquel: And while enwiki /does/ have a naming rule, it only says "Have 'bot' somewhere in the name" and most other projects have no such requirement [16:08:01] also the 'bot' flag does not force a bot to mark its edits as default [16:08:33] mark its edits as 'bot'* [16:08:44] just gives it the ability to do so [16:08:52] that's quite messy to really know how many edits are done by bots in each project, right? [16:09:24] marcmiquel: Indeed. You can only roughly approximate the lower bound. [16:10:32] I see. still the best approach is to see their names [16:15:50] For enwiki, maybe. I wouldn't expect that to hold across languages. [16:19:33] Coren: do you know any other language with different rules than including 'bot' in their nicknames? [16:21:52] marcmiquel: Most projects don't even /have/ specific bot rules, and many that do have the *Bot thing as a recommendation only. At first glance, about 10% of frwiki bots don't have it in their names at all, and a couple have it in the middle. [16:22:48] Other projects can be more fun. Maybe 사용자:제로의 사역봇 says "robot" in it, but you wouldn't know (and it /is/ a bot on kowiki) [16:26:05] Same with the meta bot policy that recommends 'bot' but does not mandate it. [16:26:27] So yeah, relying on the name is only going to get you an approximate lower bound [16:27:30] name ∪ bot flag ∪ global bot flag might get close. [16:29:16] wow [16:29:39] i guess a table of translations would be too much [16:30:28] The joys of having a federation of independent projects.
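The rough heuristic Coren arrives at above ("name ∪ bot flag ∪ global bot flag") is easy to express. In this sketch the user list and flag sets are invented; "Cabot" is included deliberately to show the false positive discussed earlier, and the kowiki bot name shows why flags are needed at all:

```python
def likely_bots(usernames, bot_flagged, global_bots):
    # Lower-bound estimate: anyone with "bot" in the name, plus anyone
    # carrying a local or global bot flag. Misses unflagged bots with
    # other names; falsely catches humans like "Cabot" (a surname).
    by_name = {u for u in usernames if "bot" in u.lower()}
    return by_name | bot_flagged | global_bots

users = ["ClueBot NG", "Cabot", "사용자:제로의 사역봇", "Alice"]
bots = likely_bots(users, bot_flagged={"사용자:제로의 사역봇"}, global_bots=set())
print(sorted(bots))
```

The Korean bot has no ASCII "bot" in its name, so only the flag set catches it; either signal alone would undercount even more.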
:-P [16:31:01] yeh [16:31:57] although i don't think there's an implicit connection between the foundation governance and this thing [16:32:05] wikidata is a common project which works very well [16:36:25] 3Wikimedia-Fundraising-CiviCRM, Labs, Wikimedia-Fundraising: Create new labs project: fundraising-integration - https://phabricator.wikimedia.org/T88599#1018187 (10awight) Please hold one moment... @hashar is suggesting we create this instance as integration-civicrm01.eqiad.wmflabs, possibly under the integratio... [16:40:55] and then there's people who have a nickname which inadvertently contains 'bot' [16:41:37] and there's the question of what a bot really /is/. If someone replaces text page for page with ctrl-f? If someone does the same thing, but with AWB? If someone does the same thing, but with pywikibot....? [16:44:01] bot could be defined by the frequency of editing which makes it impossible to be human :) [16:45:10] Maybe. But on nlwiki, we have a bot to update the nominated-for-deletion page daily, which does one edit every day ;-) [16:45:39] (however, all these edge cases probably don't matter for your end goal, which would be a reasonable guess, and maybe how that value changes in time) [16:46:47] yes, right, i resign myself to a rough result i guess [17:25:37] Wikimedia Foundation Error Our servers are currently experiencing a technical problem. This is probably temporary and should be fixed soon. Please try again in a few minutes. [17:26:02] XXN : Wikimedia sites are down due to a power shortage. [17:26:18] And should be fixed in a couple of minutes. [17:58:46] having some trouble with node_modules on tool labs. any one to help? [18:00:11] our node_modules folder works locally, but not when pushed onto the server [18:00:14] http://pastebin.com/0nsRMwAP [18:01:26] Error: Module version mismatch, refusing to load. [18:01:36] any help on where the problem could lie? [18:05:00] could it be because node on the server is v0.8.2? [18:38:24] Coren: you around?
[18:42:39] Coren: how do I add a new queue to OGE? [18:43:48] YuviPanda: qconf; easiest is with -aq if you already know exactly what settings you want. [18:44:11] Coren: right, so I want a generic-webgrid queue. can you set one up? [18:44:13] YuviPanda: Also -Aq if you have a file with your settings instead (that's what I usually do) [18:44:27] Coren: hmm, so I dump with -sq, change params, and do -Aq? [18:44:39] Coren: and then add hosts usual way? -se and -Ae? [18:45:30] YuviPanda: -sq has extra output that confuses -Aq (yes, oge sucks) [18:45:47] Coren: right, so it is whack a mole of remove things one by one? [18:45:52] Coren: also, why doesn't the puppet code automatically set this up? [18:45:59] what does the puppet queues code even do? [18:46:31] YuviPanda: It's done mostly/partly. It creates the config files atm, but I didn't want to start actually applying them until I had codfw to test - which isn't going to happen anytime soon now. [18:46:42] Coren: yeah, so I think we should finish that up now. [18:48:25] YuviPanda: That would be doubleplusgood. There are still a few tweaks needed in the template files (there's a few necessary lines missing last I tested) but it's fairly simple to turn the whole thing on once we're happy with the result. Look at /data/project/.system/gridengine/etc/ this is where the puppet stuff places things. [18:48:44] Hm. I notice the existence of /data/project/.system/gridengine/etc/queues/webgrid-generic there [18:49:21] Coren: yeah, it was already added to the tomcat nodes, and I added a new role as well [18:49:26] so it's present in puppet, just not in OGE [18:49:54] YuviPanda: Now, if the templates were completely bugfree, you could just qconf -Aq /data/project/.system/gridengine/etc/queues/webgrid-generic [18:50:14] (Which puppet is ready to do, but things are currently substituted with "echo qconf" instead) [18:50:46] Coren: right. 
[18:51:13] YuviPanda: The templates are missing a few mandatory lines atm, suspend_thresholds for one. That should be trivial-ish to fix [18:51:32] (brb, tea) [18:51:33] Coren: hmm, think we can fix them now and have generic-webgrid be added fully by puppet? :) [18:59:13] YuviPanda: Shouldn't be too long. We have to start with the hosts stuff first though. Wanna open up a phab while I look at what's missing? [18:59:25] Coren: sure! [18:59:32] Coren: let me open up a phab tracking ticket... [19:00:28] 3Tool-Labs: Fully puppetize Grid Engine (Tracking) - https://phabricator.wikimedia.org/T88711#1018656 (10yuvipanda) 3NEW a:3coren [19:00:31] Coren: ^ [19:13:49] 3Tool-Labs: Puppetize adding new hosts to OGE - https://phabricator.wikimedia.org/T88712#1018701 (10yuvipanda) 3NEW a:3coren [19:14:40] 3Tool-Labs: Puppetize adding a host to a particular queue - https://phabricator.wikimedia.org/T88713#1018712 (10yuvipanda) 3NEW a:3coren [19:14:48] Coren: ^ I'm adding blocking tasks [19:22:36] 3Labs: Make sure tools-db is backed up in some form - https://phabricator.wikimedia.org/T88716#1018754 (10yuvipanda) 3NEW a:3coren [19:23:49] 3Labs: Make sure tools-db is replicated somewhere - https://phabricator.wikimedia.org/T88718#1018775 (10yuvipanda) 3NEW a:3coren [19:31:39] 3Labs: Fix documentation for labs NFS - https://phabricator.wikimedia.org/T88723#1018838 (10yuvipanda) 3NEW a:3coren [19:48:27] 3Labs: New labs instances not being allowed to mount anything from NFS - https://phabricator.wikimedia.org/T88527#1018930 (10yuvipanda) (Subtasks filed to make this set up more robust) [19:56:13] 3Labs: Fix documentation & puppetization for labs NFS - https://phabricator.wikimedia.org/T88723#1018957 (10yuvipanda) [20:04:48] 3Tool-Labs: Fully puppetize Grid Engine (Tracking) - https://phabricator.wikimedia.org/T88711#1019015 (10scfc) In general I would like this to move from the filesystem-based stuff to hiera to keep it simple. 
The ping-pong where one instance writes something to the filesystem and then another is very cool :-), b... [20:04:53] 3Tool-Labs: Document our GridEngine set up - https://phabricator.wikimedia.org/T88733#1019016 (10yuvipanda) 3NEW a:3coren [20:16:54] 3Tool-Labs: Puppetize adding a host to a particular queue - https://phabricator.wikimedia.org/T88713#1019053 (10coren) At this time, it looks like configuration for nodes and queues is generated correctly (in /data/project/.system/gridengine/etc), but it is not applied automatically - the qconf statements are st... [20:17:43] 3Tool-Labs: Puppetize adding new node to OGE - https://phabricator.wikimedia.org/T88712#1019057 (10coren) [20:19:50] 3Tool-Labs: Puppetize adding new node to OGE - https://phabricator.wikimedia.org/T88712#1019063 (10coren) New node config is generated properly but not applied automatically (can be done with a qconf -Ae ) Once we are confident about turning on qconf from puppet, this should be automatic. There remai... [20:20:05] PROBLEM - Puppet failure on tools-exec-04 is CRITICAL: CRITICAL: 12.50% of data above the critical threshold [0.0] [20:20:59] PROBLEM - Puppet failure on tools-webgrid-04 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [20:21:18] YuviPanda|zzz: I'm not sure I want to worry all that much about the service restart anyways - it's a once-per-node thing, at the time of addition, and will be fixed in the worst case by a restart of the instance. [20:21:51] PROBLEM - Puppet failure on tools-shadow is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [20:21:56] Gak. 
[20:23:57] PROBLEM - Puppet failure on tools-submit is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [20:24:21] PROBLEM - Puppet failure on tools-exec-10 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [20:25:25] PROBLEM - Puppet failure on tools-exec-cyberbot is CRITICAL: CRITICAL: 62.50% of data above the critical threshold [0.0] [20:25:37] PROBLEM - Puppet failure on tools-exec-01 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [20:26:55] PROBLEM - Puppet failure on tools-exec-05 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [20:27:45] PROBLEM - Puppet failure on tools-exec-09 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [20:28:24] PROBLEM - Puppet failure on tools-exec-03 is CRITICAL: CRITICAL: 37.50% of data above the critical threshold [0.0] [20:29:05] Coren: bah sorry [20:29:11] Thanks for fixing [20:29:23] Poop occurs. :-) [20:29:57] That's going to get increasingly "fun" as the number of dists we have increases. [20:30:10] PROBLEM - Puppet failure on tools-exec-11 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [20:30:28] PROBLEM - Puppet failure on tools-exec-06 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [20:30:46] PROBLEM - Puppet failure on tools-master is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [20:34:40] PROBLEM - Puppet failure on tools-exec-wmt is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [20:34:43] Coren: it shouldn't. I think we should slowly deprecate precise (definitely no new precise hosts). [20:35:13] I also have some ideas about service manifests etc (some from being part of soa for prod discussions) that might make things simpler [20:35:21] The keyword here being "slowly". I don't think we want to force the tool maintainers to fiddle their things to work under Trusty forcibly for quite some time. 
[20:35:30] Coren: also I'm going on vacation for weeks! [20:35:42] PROBLEM - Puppet failure on tools-webgrid-02 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [20:35:42] I heard. Rest well. [20:36:08] Coren: oh true. But all of Magnus' tools are on Trusty now. I coordinated with him and spent 4h moving them myself :) well worth 4h I think [20:36:24] 3WMF-Legal, Tool-Labs: Set up process / criteria for taking over abandoned tools - https://phabricator.wikimedia.org/T87730#1019093 (10coren) After further discussion with Legal, that's the general process we'll be using. I'd ad-hoc something for the immediate issue, but will start discussion on Meta soon about... [20:39:44] 3WMF-Legal, Tool-Labs: Set up process / criteria for taking over abandoned tools - https://phabricator.wikimedia.org/T87730#1019096 (10Cyberpower678) >>! In T87730#1019093, @coren wrote: > After further discussion with Legal, that's the general process we'll be using. I'd ad-hoc something for the immediate issu... [20:43:26] 3WMF-Legal, Tool-Labs: Set up process / criteria for taking over abandoned tools - https://phabricator.wikimedia.org/T87730#1019099 (10coren) >>! In T87730#1019096, @Cyberpower678 wrote: > Does this mean I can have access to the wikiviewstats now? Once I checked that there are no credentials in the account that...
[20:45:02] RECOVERY - Puppet failure on tools-exec-04 is OK: OK: Less than 1.00% above the threshold [0.0] [20:45:59] RECOVERY - Puppet failure on tools-webgrid-04 is OK: OK: Less than 1.00% above the threshold [0.0] [20:46:53] RECOVERY - Puppet failure on tools-shadow is OK: OK: Less than 1.00% above the threshold [0.0] [20:48:59] RECOVERY - Puppet failure on tools-submit is OK: OK: Less than 1.00% above the threshold [0.0] [20:50:23] RECOVERY - Puppet failure on tools-exec-cyberbot is OK: OK: Less than 1.00% above the threshold [0.0] [20:50:41] RECOVERY - Puppet failure on tools-exec-01 is OK: OK: Less than 1.00% above the threshold [0.0] [20:51:25] 3WMF-Legal, Tool-Labs: Set up process / criteria for taking over abandoned tools - https://phabricator.wikimedia.org/T87730#1019127 (10Technical13) >>! In T87730#1019099, @coren wrote: >>>! In T87730#1019096, @Cyberpower678 wrote: >> Does this mean I can have access to the wikiviewstats now? > > Once I checked... [20:51:55] RECOVERY - Puppet failure on tools-exec-05 is OK: OK: Less than 1.00% above the threshold [0.0] [20:52:45] RECOVERY - Puppet failure on tools-exec-09 is OK: OK: Less than 1.00% above the threshold [0.0] [20:53:19] RECOVERY - Puppet failure on tools-exec-03 is OK: OK: Less than 1.00% above the threshold [0.0] [20:54:19] RECOVERY - Puppet failure on tools-exec-10 is OK: OK: Less than 1.00% above the threshold [0.0] [20:55:12] RECOVERY - Puppet failure on tools-exec-11 is OK: OK: Less than 1.00% above the threshold [0.0] [20:55:30] RECOVERY - Puppet failure on tools-exec-06 is OK: OK: Less than 1.00% above the threshold [0.0] [20:59:34] RECOVERY - Puppet failure on tools-exec-wmt is OK: OK: Less than 1.00% above the threshold [0.0] [21:00:42] RECOVERY - Puppet failure on tools-webgrid-02 is OK: OK: Less than 1.00% above the threshold [0.0] [21:00:46] RECOVERY - Puppet failure on tools-master is OK: OK: Less than 1.00% above the threshold [0.0] [21:17:37] PROBLEM - Free space - all mounts on 
tools-webproxy is CRITICAL: CRITICAL: tools.tools-webproxy.diskspace._var.byte_percentfree.value (<11.11%) [21:22:37] RECOVERY - Free space - all mounts on tools-webproxy is OK: OK: All targets OK [21:47:22] hi [21:48:43] I'm unable to see my edit count. The X! tools Edit Counter is showing two sections to be filled up by me: [21:48:52] 1. Username [21:48:58] 2. Wiki [21:49:18] But whenever I enter my user name and Wiki, nothing happens. [21:51:03] Coren: I feel maintainer/non-maintainer is a bit of a false dichotomy, but I'm struggling a bit with explaining the point clearly enough for the bug. Basically, there is a group of people that I would feel comfortable handing over control to, but that doesn't mean they necessarily are a maintainer now, or would be in the future. [21:51:47] It probably gets easier if you define "maintainer" as "make that thing run" :-) [21:52:22] Coren: well, I wouldn't expect them to 'make that thing run', I'd expect them to be able to find a good new maintainer [21:52:23] Can you please tell me what you are talking about? What maintainer? [21:52:40] RD_: meta-discussion on https://phabricator.wikimedia.org/T87730#1019127 [21:52:48] RD_: Try https://tools.wmflabs.org/supercount/ [21:53:10] valhallasw`cloud: "proxy maintainer?" :-) [21:53:36] Coren: yeah, something like that. But adding them as maintainer suggests they have an active part in the project /now/, which wouldn't be the case [21:53:49] 3Wikibugs, Wikimedia-Fundraising: Wikibugs bot is skipping many notifications - https://phabricator.wikimedia.org/T88747#1019330 (10awight) 3NEW [21:54:24] valhallasw`cloud: It's not clear to me that this is a useful distinction; the point remains of "someone who can care for the tool" even if all that means is "find someone else to maintain it properly" [21:54:53] Coren: I guess.
Maybe it's just that the term 'maintainer' suggests more than that [21:55:20] and it's also communicated as that (eg the no webservice message, which basically says 'complain to X, Y, Z or A when this message appears in error') [21:56:02] Well yeah, but you'd expect a caretaker to be able to field that complaint and either fix the tool or find someone who can. [21:56:58] Coren: well, yes/no. there's a difference between someone who can find someone /after you disappear/ and someone who is an active contact for the project [21:57:54] Well, if you can think of a useful way of making that distinction, do tell. Though I suppose any maintainer that leaves a "will" about who could take over the tool would work. :-) [21:58:39] Coren: yeah, that's why I couldn't find a concise, clear way to say this on the bug :-p I was hoping discussing it would help with that (which it did) [22:08:55] legoktm: python3 + io encoding hell again :< [22:09:03] sigh. [22:09:17] valhallasw`cloud: we could just remove the print() statement, it's only for debugging [22:09:27] legoktm: let's just tell python to use utf8 :-p [22:09:32] okay :D [22:13:43] !log tools.wikibugs valhallasw: Deployed 9054845f4a69a7364f5270e2ada574f696e4f70f Add MoodBar to wikimedia-collaboration wb2-phab [22:13:48] Logged the message, Master [22:14:32] Grrr. [22:14:36] clearly doesn't work [22:15:08] oh I know why [22:16:09] 3Wikibugs, Wikimedia-Fundraising: Wikibugs bot is skipping many notifications - https://phabricator.wikimedia.org/T88747#1019435 (10valhallasw) §! [22:16:13] YEAH!
[22:16:18] *victory dance* [22:18:40] :D [22:22:24] (03PS1) 10Merlijn van Deen: Set PYTHONIOENCODING in jsub [labs/tools/wikibugs2] - 10https://gerrit.wikimedia.org/r/188925 (https://phabricator.wikimedia.org/T88747) [22:25:52] (03CR) 10Legoktm: [C: 032] Set PYTHONIOENCODING in jsub [labs/tools/wikibugs2] - 10https://gerrit.wikimedia.org/r/188925 (https://phabricator.wikimedia.org/T88747) (owner: 10Merlijn van Deen) [22:26:04] (03Merged) 10jenkins-bot: Set PYTHONIOENCODING in jsub [labs/tools/wikibugs2] - 10https://gerrit.wikimedia.org/r/188925 (https://phabricator.wikimedia.org/T88747) (owner: 10Merlijn van Deen) [22:26:37] 3Wikibugs, Wikimedia-Fundraising: Wikibugs bot is skipping many notifications - https://phabricator.wikimedia.org/T88747#1019498 (10valhallasw) Unfortunately, this still doesn't fix the issue, as the bug still doesn't show up in the correct channel. The bug itself reports Sprints: § Fundraising Tech Backlog (A... [22:27:17] valhallasw`cloud: should we just have .*Fundraising.* go to their channel? [22:27:45] legoktm: I was just typing that. Please stop reading my mind :D [22:27:59] I was thinking "§ Fundraising.*", but basically the same, yeah [22:28:49] :P [22:29:15] legoktm: also a good test to see if the yaml actually supports non-ascii ;-D [22:29:39] (03PS1) 10Merlijn van Deen: Add § Fundraising.* to #wikimedia-fundraising [labs/tools/wikibugs2] - 10https://gerrit.wikimedia.org/r/188926 (https://phabricator.wikimedia.org/T88474) [22:30:06] legoktm: I don't want Pywikibot-Fundraising to end up there!!!1111oneoneone ;-) [22:30:23] lolol [22:30:53] ahahahahahhahahahaha [22:30:58] well jenkins likes it [22:30:58] IT'S THE SCREEN SCRAPING [22:31:05] it finally broke [22:31:07] ... [22:31:11] sigh [22:31:49] so the screenscraping made it through longer than using the actual API which has broken on every upgrade? 
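The jsub patch above sets PYTHONIOENCODING; a minimal standalone reproduction of why that helps (independent of wikibugs itself, with an illustrative string) is:

```python
import os
import subprocess
import sys

# When stdout is a pipe, Python 3 picks its encoding from the locale, so
# printing the '§' used in Phabricator sprint names can raise
# UnicodeEncodeError under an ASCII locale. PYTHONIOENCODING overrides
# that guess, which is what the jsub change does for grid jobs.
env = dict(os.environ, LC_ALL="C", LANG="C", PYTHONIOENCODING="utf-8")
result = subprocess.run(
    [sys.executable, "-c", "print('\u00a7 Fundraising')"],
    env=env,
    capture_output=True,
)
print(result.stdout.decode("utf-8"))
```

With the variable unset and a C locale, the same child process may crash instead of printing; forcing utf-8 makes the output locale-independent.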
[22:32:10] sounds like the mw api :-p [22:32:53] I can probably just match the tag list items instead [22:35:38] (03CR) 10Legoktm: [C: 032] Add § Fundraising.* to #wikimedia-fundraising [labs/tools/wikibugs2] - 10https://gerrit.wikimedia.org/r/188926 (https://phabricator.wikimedia.org/T88474) (owner: 10Merlijn van Deen) [22:35:50] (03Merged) 10jenkins-bot: Add § Fundraising.* to #wikimedia-fundraising [labs/tools/wikibugs2] - 10https://gerrit.wikimedia.org/r/188926 (https://phabricator.wikimedia.org/T88474) (owner: 10Merlijn van Deen) [22:40:53] legoktm: :< open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None) [22:40:59] who thought of that horrible interface [22:41:32] heh [22:45:17] now it's going into /dev/null again :< [22:46:01] * valhallasw`cloud prods wikibugs [22:46:30] oh guess what [22:46:34] it can't do utf-8 yaml! [22:47:03] *facepalm* [22:49:28] (03PS1) 10Merlijn van Deen: Assume channel list is utf-8 [labs/tools/wikibugs2] - 10https://gerrit.wikimedia.org/r/188930 [22:49:54] Python 3 assuming a crazy stdout encoding: check [22:49:59] Python 3 assuming a crazy file encoding: check [22:50:57] (03CR) 10Legoktm: [C: 032] Assume channel list is utf-8 [labs/tools/wikibugs2] - 10https://gerrit.wikimedia.org/r/188930 (owner: 10Merlijn van Deen) [22:51:04] valhallasw`cloud: why did the jenkins test pass then? [22:51:13] (03Merged) 10jenkins-bot: Assume channel list is utf-8 [labs/tools/wikibugs2] - 10https://gerrit.wikimedia.org/r/188930 (owner: 10Merlijn van Deen) [22:51:19] legoktm: I don't know? maybe locale is set in jenkins? 
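The "assume channel list is utf-8" fix above boils down to passing an explicit encoding to open(), since Python 3 otherwise uses the locale's preferred encoding. A self-contained sketch (filename and contents illustrative):

```python
import os
import tempfile

# Write and re-read a config snippet containing '§'. Without
# encoding='utf-8', open() falls back to locale.getpreferredencoding(),
# which can be ASCII on grid nodes and make the read fail.
path = os.path.join(tempfile.mkdtemp(), "channels.yaml")
with open(path, "w", encoding="utf-8") as f:
    f.write("'\u00a7 Fundraising.*': '#wikimedia-fundraising'\n")

with open(path, encoding="utf-8") as f:  # explicit, locale-independent
    text = f.read()
print(text.strip())
```

This also explains the jenkins question: a test machine with a utf-8 locale masks the bug, while a grid node with a POSIX locale triggers it.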
[22:51:55] !log tools.wikibugs valhallasw: Deployed d9a83a0d71b0dd4500d40ebba5232b2ded362be5 Assume channel list is utf-8 wb2-irc [22:52:00] Logged the message, Master [22:52:17] oh, and now it's going to crash again because we print the channel list, and I didn't qdel it [22:55:43] (03PS1) 10Merlijn van Deen: Fixup so the new jsub actually works [labs/tools/wikibugs2] - 10https://gerrit.wikimedia.org/r/188934 [22:57:47] valhallasw`cloud: I'm going afk now so you should probably just self-merge stuff so it's not broken [22:58:01] (03CR) 10Merlijn van Deen: [C: 032] Fixup so the new jsub actually works [labs/tools/wikibugs2] - 10https://gerrit.wikimedia.org/r/188934 (owner: 10Merlijn van Deen) [22:58:14] (03Merged) 10jenkins-bot: Fixup so the new jsub actually works [labs/tools/wikibugs2] - 10https://gerrit.wikimedia.org/r/188934 (owner: 10Merlijn van Deen) [22:58:36] ok. I'm fixing tests for get_tags, but writing tests is hard as always :< [23:05:48] legoktm: and I hate dictionary views :( [23:06:25] list(next(iter(tags.values()))) instead of tags.values()[0].keys() [23:07:55] (03PS1) 10Merlijn van Deen: Fix project tag screen scraping [labs/tools/wikibugs2] - 10https://gerrit.wikimedia.org/r/188936 (https://phabricator.wikimedia.org/T88747) [23:08:06] (03CR) 10jenkins-bot: [V: 04-1] Fix project tag screen scraping [labs/tools/wikibugs2] - 10https://gerrit.wikimedia.org/r/188936 (https://phabricator.wikimedia.org/T88747) (owner: 10Merlijn van Deen) [23:08:38] (03PS2) 10Merlijn van Deen: Fix project tag screen scraping [labs/tools/wikibugs2] - 10https://gerrit.wikimedia.org/r/188936 (https://phabricator.wikimedia.org/T88747)
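The dictionary-views grumble above is a Python 2 vs 3 difference: .values() now returns a non-subscriptable view, so tags.values()[0].keys() fails and the next(iter(...)) workaround is needed. A sketch with illustrative data:

```python
# In Python 3, dict.values() returns a view object, so the Python 2
# idiom tags.values()[0] raises TypeError. next(iter(...)) fetches the
# first value and list(...) materializes its keys.
tags = {"PHID-1": {"Wikibugs": 1, "Wikimedia-Fundraising": 2}}

# Python 2 style (fails in Python 3):
#   first_project_tags = tags.values()[0].keys()
first_project_tags = list(next(iter(tags.values())))
print(first_project_tags)
```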