[10:36:11] (CR) QChris: "> if Camus is not importing into hourly buckets properly, then that is a" [analytics/kraken] - https://gerrit.wikimedia.org/r/121531 (owner: Ottomata)
[11:10:51] (CR) QChris: "> I have addressed all comments [...]" (1 comment) [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/120542 (owner: Nuria)
[11:11:18] (CR) QChris: [C: -1] Adding coding guidelines to README.md file (3 comments) [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/120542 (owner: Nuria)
[11:12:11] (CR) QChris: "ping" [analytics/wikistats] - https://gerrit.wikimedia.org/r/100368 (owner: Nemo bis)
[11:13:14] qchris: ping to whom?
[11:13:27] Nemo_bis: To ezachte to merge it.
[11:13:35] oki
[11:13:44] I think he has some local code still to be submitted now
[11:13:55] he did a lot of wonderful work on parallelisation
[11:13:58] Probably.
[11:14:01] Yes!
[11:14:14] But still ... the change you made looks harmless.
[11:14:31] And it's sitting in my "Incoming reviews" for sooooo long already.
[11:15:02] :) You could remove yourself
[11:15:11] I think the problem is still that the tests are failing
[11:15:16] I want that change merged :-)
[11:15:36] jenkins-bot voted V+2.
[11:15:48] https://integration.wikimedia.org/ci/job/analytics-wikistats/286/console
[11:15:53] says "Finished: SUCCESS"
[11:16:12] oooh
[11:16:21] I thought I fixed those tests for ezachte some time ago.
[11:16:34] If you still find something that's failing ... please let us know.
[11:17:12] let's see
[11:17:49] qchris: I've read your change that fixed Ezachte's tests
[11:18:03] qchris: I liked how you worked around his check for not-older-than-1-year
[11:18:39] average: I no longer remember :-) I only remember that I fixed something on that end.
[11:18:45] * qchris is too stupid to remember things.
[11:19:31] qchris: is reportcard.wikimedia.org public ?
[11:19:56] (CR) Nemo bis: "recheck" [analytics/wikistats] - https://gerrit.wikimedia.org/r/118257 (owner: Nemo bis)
[11:20:08] matanya: No clue. milimetric would know.
[11:20:19] matanya: I know only reportcard.wmflabs.org.
[11:20:33] matanya: (which is public)
[11:20:41] yes. thanks
[11:22:44] (PS6) Nuria: Adding coding guidelines to README.md file [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/120542
[11:23:01] (PS2) Nemo bis: Fix typo [analytics/wikistats] - https://gerrit.wikimedia.org/r/118257
[11:23:22] (CR) jenkins-bot: [V: -1] Fix typo [analytics/wikistats] - https://gerrit.wikimedia.org/r/118257 (owner: Nemo bis)
[11:23:55] The first didn't go well :/
[11:24:18] (CR) Nuria: Adding coding guidelines to README.md file (3 comments) [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/120542 (owner: Nuria)
[11:25:13] Nemo_bis: Oh wikistats is fun :-)
[11:26:25] Yes :)
[11:26:55] I managed to run it for translatewiki.net dumps and I'm quite happy koti.kapsi.fi/~federico/twnstats/
[11:27:06] http://koti.kapsi.fi/~federico/twnstats/
[11:28:41] (CR) QChris: "recheck" [analytics/wikistats] - https://gerrit.wikimedia.org/r/103535 (owner: Erik Zachte)
[11:31:53] hashar: Can I get Jenkins to re-run tests on an already merged commit?
[11:32:20] (PS2) Nemo bis: Comment some path tests which overrode standard ones [analytics/wikistats] - https://gerrit.wikimedia.org/r/118261
[11:32:21] (or am I just not patient enough for Jenkins to pick up my "recheck"?)
[11:32:34] (PS2) Nemo bis: Remove all trailing whitespace [analytics/wikistats] - https://gerrit.wikimedia.org/r/118266
[11:32:36] (CR) jenkins-bot: [V: -1] Comment some path tests which overrode standard ones [analytics/wikistats] - https://gerrit.wikimedia.org/r/118261 (owner: Nemo bis)
[11:32:41] qchris: hmm on a merged commit yeah that might be possible
[11:32:42] (PS4) Nemo bis: Typofix in comments [analytics/wikistats] - https://gerrit.wikimedia.org/r/118366
[11:32:46] qchris: which change?
[11:32:53] qchris: also recheck only runs the lint jobs :-(
[11:32:53] (CR) jenkins-bot: [V: -1] Remove all trailing whitespace [analytics/wikistats] - https://gerrit.wikimedia.org/r/118266 (owner: Nemo bis)
[11:32:55] hashar: https://gerrit.wikimedia.org/r/#/c/103535/
[11:33:09] (CR) jenkins-bot: [V: -1] Typofix in comments [analytics/wikistats] - https://gerrit.wikimedia.org/r/118366 (owner: Nemo bis)
[11:33:11] (PS4) Nemo bis: [Full dump analysis] Reduce edits_only and reverts_only intricacy [analytics/wikistats] - https://gerrit.wikimedia.org/r/118436
[11:33:19] (PS2) Nemo bis: Add perl dependencies to a CollectMailArchives.pl comment [analytics/wikistats] - https://gerrit.wikimedia.org/r/91998
[11:33:26] (CR) jenkins-bot: [V: -1] [Full dump analysis] Reduce edits_only and reverts_only intricacy [analytics/wikistats] - https://gerrit.wikimedia.org/r/118436 (owner: Nemo bis)
[11:33:36] Only lint jobs ... mhmm ... is there a way to rerun all tests?
[11:33:41] (CR) jenkins-bot: [V: -1] Add perl dependencies to a CollectMailArchives.pl comment [analytics/wikistats] - https://gerrit.wikimedia.org/r/91998 (owner: Nemo bis)
[11:33:48] (PS3) Nemo bis: Move download limiter to proper place and comment it as it's only a test [analytics/wikistats] - https://gerrit.wikimedia.org/r/92056
[11:33:57] (PS2) Nemo bis: Archives are downloaded in .txt.gz format: fix matching and opening [analytics/wikistats] - https://gerrit.wikimedia.org/r/92066
[11:33:58] qchris: the job https://integration.wikimedia.org/ci/job/analytics-wikistats/295/console should have all the parameters to replay the same test. You can browse to https://integration.wikimedia.org/ci/job/analytics-wikistats/295/ then, assuming you are logged in with your labs account, click the [Rebuild] link on the left
[11:34:03] (CR) jenkins-bot: [V: -1] Move download limiter to proper place and comment it as it's only a test [analytics/wikistats] - https://gerrit.wikimedia.org/r/92056 (owner: Nemo bis)
[11:34:07] qchris: that will show a bunch of parameters
[11:34:18] (CR) jenkins-bot: [V: -1] Archives are downloaded in .txt.gz format: fix matching and opening [analytics/wikistats] - https://gerrit.wikimedia.org/r/92066 (owner: Nemo bis)
[11:34:44] With open ones I can edit the commit message to have tests rerun :P
[11:34:47] qchris: some of them being the git reference pointing to a merge of that patch with whatever the tip of the branch was at that time
[11:35:07] OK. I'll use them to rerun the job. Thanks hashar!
[11:35:42] Nemo_bis: You're right. A null commit might be easier :-)
[11:36:04] qchris: you can re-trigger the job in jenkins IMHO
[11:36:08] qchris: but that only runs that patchset which is quite old
[11:36:29] hashar: It's still current master, so it should be fine.
[11:37:42] qchris: the job is also triggered after a change got merged. So yeah build 295 is "safe"
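For the record, the "null commit" trick Nemo_bis and qchris settle on above can be scripted. A minimal sketch, assuming a local clone of analytics/wikistats; the git-review tool and the commit message text are assumptions, not something quoted from the log:

```bash
# create an empty commit purely to make jenkins-bot run the full test job again
git commit --allow-empty -m "DO NOT SUBMIT: dummy commit to rerun Jenkins tests"

# push it to Gerrit for review (or: git push origin HEAD:refs/for/master),
# then abandon the change once the test results are in
git review
```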
[11:37:44] (PS1) QChris: DO NOT COMMIT. Dummy commit to have Jenkins rerun tests [analytics/wikistats] - https://gerrit.wikimedia.org/r/123595
[11:37:59] or another patch :-]
[11:38:02] (CR) jenkins-bot: [V: -1] DO NOT COMMIT. Dummy commit to have Jenkins rerun tests [analytics/wikistats] - https://gerrit.wikimedia.org/r/123595 (owner: QChris)
[11:38:11] Yes. /me is lazy.
[11:38:18] I do the same
[11:38:19] :p
[11:38:42] I am only waiting for YuviPanda to come around and fake another "gerrit merged the patch" message :-D
[11:38:53] (As he did before)
[11:39:15] :)
[11:39:20] 100 % failure for me :<
[11:39:48] Nemo_bis: the empty commit failed as well.
[11:39:54] So it's not your commits.
[11:40:23] I'll have a look :-(
[11:41:59] :-(
[11:43:01] (CR) QChris: "Sorry for the noise." [analytics/wikistats] - https://gerrit.wikimedia.org/r/103535 (owner: Erik Zachte)
[11:52:01] Nemo_bis: I'll track it at https://bugzilla.wikimedia.org/show_bug.cgi?id=63478
[12:02:44] (PS1) Stefan.petrea: Fixed regression test. [analytics/wikistats] - https://gerrit.wikimedia.org/r/123603
[12:03:17] qchris_away, Nemo_bis
[12:03:18] ^^
[12:03:31] "SUCCESS in 24s"
[12:11:06] (CR) Hashar: [C: -1] "Missing reference to christian bug report https://bugzilla.wikimedia.org/show_bug.cgi?id=63478" [analytics/wikistats] - https://gerrit.wikimedia.org/r/123603 (owner: Stefan.petrea)
[12:11:08] :-D
[12:11:17] You need this in the commit summary: Bug: 63478
[12:11:18] \O/
[12:13:00] (PS2) Stefan.petrea: Fixed regression test. [analytics/wikistats] - https://gerrit.wikimedia.org/r/123603
[12:13:40] (PS3) Stefan.petrea: Fixed regression test. [analytics/wikistats] - https://gerrit.wikimedia.org/r/123603
[12:15:56] milimetric: yeh I am around
[12:16:34] milimetric: ping me whenever your breakfast is done / you are ready :-]
[12:17:10] average: ahah
[12:17:13] https://git.wikimedia.org/blob/analytics%2Fwikistats.git/43317757a6929ca4d2767be3ba5560184e386dbb/squids%2Fperl%2FSquidCountArchive.pl;jsessionid=gfc9dm1wxk8h2c1l8tqptsey#L322
[12:17:17] that's overlong but hey
[12:18:11] (PS4) Hashar: Fixed regression test. [analytics/wikistats] - https://gerrit.wikimedia.org/r/123603 (owner: Stefan.petrea)
[12:18:45] (CR) Hashar: "Properly set the commit summary header Bug: 63478" [analytics/wikistats] - https://gerrit.wikimedia.org/r/123603 (owner: Stefan.petrea)
[12:19:24] oh cool, didn't know that about the header
[12:20:08] hashar: I should probably take that jsessionid out of the link ?
[12:21:48] (PS5) Stefan.petrea: Fixed regression test. [analytics/wikistats] - https://gerrit.wikimedia.org/r/123603
[12:22:21] (PS6) Stefan.petrea: Fixed regression test. [analytics/wikistats] - https://gerrit.wikimedia.org/r/123603
[12:22:28] average: or just refer to the file and copy paste the relevant code :]
[12:22:38] that is not really important though
[12:23:52] hashar: yeah, except code might change with time, and that's why I wanted to use a persistent link to one particular commit
[12:25:21] hashar: do we have a link shortener ?
[12:25:35] then just copy paste the code
[12:25:42] and reference the commit sha1 that introduced that line?
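Putting hashar's two tips together — the "Bug:" footer and referencing code by the commit sha1 rather than by a line-number link that can drift — a commit message for the fix above might look like this. The summary and body text here are invented for illustration; only the footer convention and the bug number come from the log:

```
Fix regression test for SquidCountArchive date check

Reference the relevant snippet by the sha1 that introduced it
(e.g. 52c0dc0d) instead of linking to a line number in the web
viewer, since line numbers move as the file changes.

Bug: 63478
```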
[12:26:25] hashar: how would I find the commit that introduced line 322 in SquidCountArchive.pl ? git-blame ?
[12:26:33] yup
[12:26:43] might try: git gui blame
[12:26:50] lets you easily browse the blame history
[12:28:17] hashar: git-blame erroneously attributes that line to me
[12:28:23] hashar: however, I was not the one to introduce it
[12:28:45] hashar: but because only a small part of wikistats's lifetime was tracked by git, I appear as being the one who modified it
[12:28:52] in commit 52c0dc0d76d37d355a8911ec2c7a87594ffbc9ab
[12:29:00] where I changed a lot of the tabs/spaces
[12:29:08] by mistake
[12:29:36] so I cannot find the commit that introduced that line (322 in SquidCountArchive.pl)
[12:29:52] average: git blame reports the last commit that altered the line
[12:29:57] but you can browse the history further
[12:30:14] ie: git blame 52c0dc0d76d37d355a8911ec2c7a87594ffbc9ab^ ...SquidCountArchive.pl
[12:30:27] with ^ meaning the commit before 52c0dc0d
[12:30:27] user@xw:~/wikistats/analytics-wikistats/squids$ git blame -L 322,322 perl/SquidCountArchive.pl
[12:30:30] 52c0dc0d (Petrea Corneliu Stefan 2012-10-24 21:40:53 +0300 322) { abort "$desc '$date' should be a year or less ago (but before today: '$date_today
[12:30:33] or git gui blame
[12:31:08] git blame -L 322,322 52c0dc0d^ perl/SquidCountArchive.pl
[12:31:12] that might give you what you want
[12:31:28] user@xw:~/wikistats/analytics-wikistats/squids$ git blame -L 322,322 52c0dc0d^ perl/SquidCountArchive.pl
[12:31:31] fac1e0bf squids/scripts/SquidCountArchive.pl (Erik Zachte 2012-10-13 15:29:43 +0000 322) $file_csv_opsys = "public/SquidDataOpSys.csv" ;
[12:31:34] ah yes
[12:31:36] hmm, ok
[12:31:43] I learned about git-blame today
[12:31:58] also try git gui blame
[12:32:17] and you can pass it -w to ignore whitespace changes
[12:39:43] hashar: how can I tell it to give me all the commits that modified that line ?
[12:39:48] it seems to block at the first one
[12:40:12] and .. or .. or ^ need to be specified to get the other ones..
[12:42:58] average: no idea :-(
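Although nobody in the channel had an answer at the time, git does have a built-in way to list every commit that touched a given line: the -L option of git log (git 1.8.4 or later), which follows a line range back through history without the manual ^-hopping average was doing. A sketch, run from the same squids/ directory as the blame examples above:

```bash
# show every commit that modified line 322 of the file, oldest changes last,
# with the relevant diff hunks; git tracks the line as it moves around
git log -L 322,322:perl/SquidCountArchive.pl
```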
[13:38:23] (PS1) Milimetric: Testing Lint, will abandon [analytics/limn-mobile-data] - https://gerrit.wikimedia.org/r/123636
[13:43:20] (CR) jenkins-bot: [V: -1] Testing Lint, will abandon [analytics/limn-mobile-data] - https://gerrit.wikimedia.org/r/123636 (owner: Milimetric)
[13:44:32] :-(
[13:44:33] https://integration.wikimedia.org/ci/job/analytics-limn-mobile-data-tox-flake8/1/
[13:44:49] ./generate.py:21:1: E302 expected 2 blank lines, found 1
[13:44:49] ./generate.py:313:121: E501 line too long (143 > 120 characters)
[13:45:23] milimetric: congratulations!
[13:46:12] (PS2) Milimetric: Testing Lint, will abandon [analytics/limn-mobile-data] - https://gerrit.wikimedia.org/r/123636
[13:47:40] (PS3) Milimetric: Pass new linting job in jenkins [analytics/limn-mobile-data] - https://gerrit.wikimedia.org/r/123636
[13:48:47] (CR) Milimetric: [C: 2] "This repository is now verified by jenkins (just linting for now)" [analytics/limn-mobile-data] - https://gerrit.wikimedia.org/r/123636 (owner: Milimetric)
[13:53:23] hey ottomata, stuff's looking good in mediawiki-vagrant
[13:53:30] my CPU fan goes nuts, but it's working
[13:53:48] oh wait, OpenJDK just said
[13:53:49] # There is insufficient memory for the Java Runtime Environment to continue.
[13:54:00] hah
[13:54:01] moment, while I give it more memory
[13:54:05] MORE MEMORRYYY
[13:55:22] * average is thinking that switching/forking the mediawiki-vagrant to use the LXC provider would be a solution to this, as the overhead of virtualization is much thinner there..
[14:00:52] ottomata: qchris: nuria: ottomata: csalvia_: standup is in the google cave
[14:00:58] Nooooooo!
[14:01:01] other cave is broken man
[14:01:08] I can't get into it at all now
[14:01:14] ok. coming.
[14:01:19] I really really tried, too many pain points
[14:01:26] milimetric: I've contacted them, basically they acknowledged that it has problems
[14:01:41] they recommend chrome for "the best experience"
[15:01:47] (Abandoned) Milimetric: Add concatenated recurrent reports [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/119693 (owner: Csalvia)
[15:31:58] (PS3) Ottomata: Adding camus wrapper script and camus properties file [analytics/kraken] - https://gerrit.wikimedia.org/r/121531
[15:33:02] (CR) Ottomata: "Ok, I totally forgot the configs that told Camus how to bucket our stuff properly! I just added in patchset 3." [analytics/kraken] - https://gerrit.wikimedia.org/r/121531 (owner: Ottomata)
[15:59:19] hi halfak, qchris, Ironholds, :)
[15:59:22] yall there?
[15:59:31] here
[15:59:37] meeting :-(
[15:59:45] so, stat1003.wikimedia.org is up and running, with all data and homedirs copied over
[15:59:52] just about to start my standup with growth.
[15:59:57] ooo
[15:59:59] i just copied homedirs, copied /a -> /srv (with symlink) yesterday
[16:00:11] was hoping you 3 could check it out and make sure stuff was ok
[16:00:23] sure. could you drop an email with conn. details?
[16:00:27] gotta run!
[16:00:29] i want to send an email today encouraging everyone to start using stat1003
[16:00:31] ok, um
[16:00:35] same as stat1
[16:00:40] but stat1003.wikimedia.org
[16:00:45] everything else is the same
[16:00:50] will check it out asap
[16:00:58] k danke
[16:02:04] ottomata: Ok. I'll test today.
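ottomata's "same as stat1, but stat1003.wikimedia.org" translates into a client-side config stanza along these lines. This is a sketch only: the username, alias, and options below are illustrative assumptions (halfak mentions agent forwarding later in the log), and any bastion/ProxyCommand setup people actually used is not shown here:

```
# ~/.ssh/config — hypothetical entry for the new stat box
Host stat1003
    HostName stat1003.wikimedia.org
    User halfak          # your own shell username
    ForwardAgent yes     # only if you rely on agent forwarding
```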
[18:27:28] halfak: you in meeting still? :)
[19:40:42] Hey ottomata, sorry. Just finished meetings.
[19:41:14] setting up ssh config
[19:41:23] cool, so, q for you:
[19:41:28] do you host anything on stat1's public IP anymore?
[19:41:34] i remember you used to host snuggle there, right?
[19:41:34] negative
[19:41:38] yup
[19:41:43] not anymore though
[19:42:39] was able to connect to stat1003. Should I pull my own homedir stuff?
[19:45:02] it should be there
[19:45:03] is it?
[19:45:27] it is not!
[19:45:29] :?
[19:45:35] HM
[19:45:39] let me do that
[19:46:45] OK. Should I not change anything in my homedir? I sort of have processes writing to files right now.
[19:46:55] Some of the files are very large.
[19:46:55] i'm rsyncing it over
[19:46:59] this should have already been done
[19:47:02] oh, on stat1 you mean?
[19:47:06] yup
[19:47:18] whoa ./data is 199G
[19:47:29] you should probably put that in /srv (previously /a) man!
[19:47:30] :p
[19:47:45] hm
[19:48:00] Na. It's nice to have it nearby and not deal with symlinks. What do we have all that disk for?
[19:48:32] symlinks?
[19:48:59] the only reason there is a large homedir on stat1
[19:49:03] is because / was small
[19:49:09] I work with datasets.
[19:49:12] stat1003 does not have a separate /home partition
[19:49:15] I think that's reason enough.
[19:49:15] symlinks are so cheap that they're really cheap
[19:49:20] so if you fill up your /home, you will fill up /
[19:49:35] why not just work in /srv/halfak/data or something then?
[19:49:38] and do everything there
[19:49:40] you can create dirs on /srv
[19:49:42] in
[19:49:43] Why have a homedir?
[19:49:44] (aka /a)
[19:50:01] It's not like I'm filling up the disk
[19:50:21] i guess so, i'm not going to stop you :p
[19:50:40] but be super careful with that on stat1003, since /home is not a separate partition there
[19:51:26] Is /srv an NFS mount?
[19:51:30] no
[19:51:43] a large RAID 5 lvm partition across 4 drives
[19:52:03] Just a copy from stat1:/a?
[19:52:07] that is done already
[19:52:12] it seems some home dirs didn't finish
[19:52:14] i'm copying your /home now
[19:52:16] So it is a copy
[19:52:19] yes
[19:52:27] stat1 is going to go away soon
[19:52:36] stat1003 is the replacement
[19:52:43] *sigh* OK, this is disruptive.
[19:52:53] stat1 is in tampa
[19:52:59] the whole datacenter is going away
[19:53:22] I understand, but working on this right now is interrupting other work that I promised. I realize that it is important.
[19:54:19] It seems like I'm going to need to stop all work on stat1 (I have a lot of files being written), re-rsync, and then start working from stat1003.
[19:54:38] halfak: did you see the email on March 25th titled 'stat1 account audit'?
[19:55:05] Yup
[19:55:51] Not sure how that's relevant. I don't see myself in the list of accounts to be dropped.
[19:56:00] its relevant because it says
[19:56:12] We will soon be migrating everything on stat1 over to a new server in eqiad: stat1003
[19:56:31] your name isn't on the list, yup that's why your homedir is supposed to be migrated
[19:56:35] (doing that now)
[19:56:55] Sure. "migrating" might be physically moving the hard drive to a new box. Also, I received no warning that I'd need to schedule a few hours for this today.
[19:57:03] you don't have to do it today
[19:57:08] stat1 is still functional
[19:57:11] i'm getting stat1003 ready
[19:57:19] I'm honestly unsure if I should continue with the work that I had planned or wait for the rsync to minimize confusion and loss.
[19:57:23] and will send an email out soon about impending doom of stat1
[19:57:27] naw, keep going
[19:57:28] you're ok
[19:57:34] if you want I can wait to rsync over your homedir
[19:57:42] its rsync, so I can do it multiple times :p
[19:57:50] :/ I guess we'll just copy the big data files again.
[19:58:03] yeah that's fine, i'm going to go ahead and stop the rsync of your homedir
[19:58:08] and we can copy it when you are good and ready
[19:58:12] Meh... most of the ones I'm writing now are small.
[19:58:19] ok,
[19:58:26] OK. That's fine. How about we schedule the copy. Would that work?
[19:58:30] sure
[19:58:33] early next week ok?
[19:58:38] or tomorrow even?
[19:58:47] I think I can manage tomorrow.
[19:58:50] ok cool
[19:59:29] Is the rsync for /a/public-datasets moved to stat1003?
[19:59:46] yes, should be
[20:00:10] OK. So, I guess I'll pull over the files I dumped in there this morning.
[20:00:11] its still running on stat1 though
[20:00:11] so right now
[20:00:22] no its ok, stat1 is still 100% functional
[20:00:31] and I had planned on still doing another rsync from /a
[20:00:35] i'm just getting stat1003 ready
[20:00:36] OK
[20:00:44] and want to make sure things are ok with you, the main users of it that I know of
[20:00:50] before I ask more people to get ready to move
[20:01:14] stat1003 is just puppetized the same way as stat1 is
[20:01:22] so there is an rsync job for public-datasets on stat1003 too
[20:01:28] but neither of those jobs does --delete
[20:01:32] so it shouldn't make a difference
[20:02:02] OH
[20:02:16] I am totally wrong, I am looking at the limn-public-data
[20:02:18] i am sorry halfak
[20:02:25] the rsync pulls from stat1001
[20:02:25] hm
[20:02:27] ok
[20:02:30] i will revert that for now
[20:02:33] yes?
[20:02:42] i thought stat1003 and stat1 would be pushing
[20:02:43] but, it isn't, stat1001 is pulling
[20:02:47] I'm confused what you are asking.
[20:02:50] and the hostname changed in puppet
[20:02:56] ok, sorry
[20:03:09] when you just asked about the public-datasets rsync, i said it was still working
[20:03:21] but i just double checked, i was looking at the wrong rsync job when I answered that question
[20:03:27] stat1001 has the rsync job
[20:03:32] so it pulls
[20:03:36] and right now it is pulling from stat1003
[20:03:48] you just said you have work on stat1 that needed to be rsynced
[20:03:52] so, it probably hasn't been yet
[20:04:17] if you would prefer, I can revert that stat1003 change, so that stat1001 still pulls from stat1 as usual
[20:05:51] I'm fine with whatever configuration.
[20:06:02] I can start dumping public datasets onto stat1003.
[20:06:40] I just want to make sure they make it to stat1001.
[20:06:45] yeah? if you are ok with that then I will leave it, I really don't mind reverting
[20:06:52] how about you put whatever you want on stat1003 now
[20:06:55] /srv/public-datasets
[20:07:02] and I will manually run the rsync and we can double check
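To make the data flow concrete: stat1001 periodically pulls public-datasets from the stat box, and ottomata spells out the one-off copy he runs a few lines below ("rsync stat1:/a/public-datasets to stat1003:/srv/public-datasets"). A manual equivalent might look like this; the flags are an assumption, the hosts and paths come from the discussion, and note that per the log the puppetized jobs do not pass --delete, so removals on the source never propagate:

```bash
# run on stat1003: pull the published datasets over from stat1
rsync -av stat1.wikimedia.org:/a/public-datasets/ /srv/public-datasets/

# stat1001 then picks the files up on its next scheduled pull
```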
[20:08:43] bah. Looks like I messed up my agent forwarding
[20:08:52] Would appreciate the rsync
[20:08:58] While I work this out.
[20:11:22] ok, I will rsync stat1:/a/public-datasets to stat1003:/srv/public-datasets
[20:11:24] s'ok?
[20:11:31] yup
[20:11:35] (btw, there is a symlink on stat1003:/a -> /srv, to make things easier)
[20:11:35] thanks
[20:11:50] saw that. Accidentally made use of it the first time :)
[20:12:19] cool :)
[20:18:14] halfak: stat1001 pull from stat1003 looked like it worked
[20:18:20] want to check if the files are up where you expect?
[20:18:47] Yup. They're there. Thanks.
[20:19:03] cool
[20:19:21] Now if there were only a way that I could kick stat1001's rsync.
[20:20:59] oh I did that
[20:21:01] right?
[20:21:04] that's what I meant
[20:21:53] Oh woops. I misunderstood.
[20:22:18] Yup. That worked as expected.
[20:22:35] cool
[20:35:52] (CR) QChris: [C: -1] "Nits about unused variables." (4 comments) [analytics/wikistats] - https://gerrit.wikimedia.org/r/123603 (owner: Stefan.petrea)
[20:44:27] ottomata: How to verify the SSH fingerprint for stat1003?
[20:44:30] Would you mind creating
[20:44:32] https://wikitech.wikimedia.org/wiki/Help:SSH_Fingerprints/stat1003.wikimedia.org
[20:44:32] ?
[20:44:38] hey
[20:44:39] qchris
[20:44:46] i messed up stat1003 a second ago while rsyncing some homedirs
[20:44:48] sorry
[20:44:49] i'm fixing now
[20:45:02] Soooo I'll try tomorrow? Ok.
[20:45:21] No problem.
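Once the Help:SSH_Fingerprints page exists (ottomata creates it just below), qchris's verification question has a standard answer. One way to do the check from a client, sketched with an illustrative temp path:

```bash
# grab the host keys the server presents and print their fingerprints,
# then compare them by eye against the wikitech fingerprints page
ssh-keyscan stat1003.wikimedia.org > /tmp/stat1003.keys
ssh-keygen -lf /tmp/stat1003.keys
```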
[20:59:31] qchris: https://wikitech.wikimedia.org/wiki/Help:SSH_Fingerprints/stat1003.wikimedia.org
[20:59:52] ottomata: Thanks!
[21:17:22] (CR) QChris: [C: -1] "Mostly comments from older PSs that got ignored :-(" (7 comments) [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/120542 (owner: Nuria)
[21:29:21] ottomata: Is it ok if I just remove the geowiki cronjobs for the stats user on stat1 by hand, or should I puppetize that?
[21:30:53] you can remove them by hand if you like, puppet is disabled on stat1 for now
[21:31:03] if we run puppet there, they will be put back in place by puppet
[21:31:07] ok. Then I'll remove them by hand.
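For completeness, removing those jobs by hand would look something like this. The geowiki entry names are not in the log, so nothing specific is shown being deleted; and as ottomata warns above, a puppet run would put back whatever puppet still manages:

```bash
# as root on stat1, while puppet is disabled:
crontab -u stats -l    # list the stats user's cron jobs first
crontab -u stats -e    # then delete the geowiki lines in the editor
```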