[00:12:28] Warning: There are 4 users waiting for shell, displaying last 4: Ptarjan (waiting 120 minutes) Saramg (waiting 115 minutes) Joelp (waiting 112 minutes) Dp (waiting 111 minutes)
[00:12:29] Warning: There is 1 user waiting for access to tools project: Danilo (waiting 86 minutes)
[00:59:41] What's the internal route to http://huggle.wmflabs.org/data/wl.php?action=read&wp=en ?
[01:06:28] Warning: There are 4 users waiting for shell, displaying last 4: Ptarjan (waiting 174 minutes) Saramg (waiting 169 minutes) Joelp (waiting 166 minutes) Dp (waiting 165 minutes)
[01:06:29] Warning: There is 1 user waiting for access to tools project: Danilo (waiting 140 minutes)
[01:16:28] <^demon> When someone gets a chance to poke those 4 users waiting for shell, that'd be great. I'd love to respond to an e-mail with instructions for peeps.
[01:18:46] ^demon: what would they be doing wrong to trigger that?
[01:19:30] <^demon> Not a clue. I just found out what their 4 usernames were, and saw they didn't have loginviashell yet.
[01:19:36] <^demon> And then saw the bot pestering here.
[01:19:57] Warning: There are 4 users waiting for shell, displaying last 4: Ptarjan (waiting 188 minutes) Saramg (waiting 183 minutes) Joelp (waiting 180 minutes) Dp (waiting 179 minutes)
[01:19:58] Warning: There is 1 user waiting for access to tools project: Danilo (waiting 153 minutes)
[01:21:08] Jasper_Deng: Do you know how to access *.wmflabs.org from inside?
[01:21:22] a930913: I heard you use a SOCKS proxy for that, over an SSH session.
[01:21:27] * Jasper_Deng doesn't know the complete details
[01:21:50] Jasper_Deng: That would only be if I had access to *.
[01:24:34] I could ssh tunnel out of labs, but that would be ugly and prone to breaking.
[01:52:42] a930913: You can access tools.wmflabs.org at tools-webproxy, but I don't think there's a general scheme.
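A minimal sketch of the SSH-plus-SOCKS-proxy approach Jasper_Deng mentions above; the bastion hostname and local port are illustrative assumptions, not something stated in the log:

    # open a dynamic (SOCKS5) forward through a labs bastion you have access to
    ssh -N -D 1080 yourshellname@bastion.wmflabs.org
    # then point a SOCKS-aware client at it, e.g.:
    curl --socks5-hostname localhost:1080 "http://huggle.wmflabs.org/data/wl.php?action=read&wp=en"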
[06:08:16] !log test
[06:08:16] Message missing. Nothing logged.
[06:08:33] has labs-morebots been stable?
[06:08:38] or has it required manual restarts?
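For reference, the logging bot wants a project name followed by a free-form message; the successful entries later in this log follow the pattern below (shown with one of petrb's actual messages):

    !log <project> <message>
    !log tools petrb: installing maven on -dev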
[07:02:37] !tr Danilo
[07:02:37] request page: https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Access_Request/Danilo?action=edit talk page: https://wikitech.wikimedia.org/wiki/User_talk:Danilo?action=edit&section=new&preload=Template:ToolsGranted link: https://wikitech.wikimedia.org/w/index.php?title=Special:NovaProject&action=addmember&projectname=tools
[07:04:01] !log tools petrb: installing maven on -dev
[07:04:03] Logged the message, Master
[07:08:27] Warning: There are 2 users waiting for shell, displaying last 2: Joelp (waiting 528 minutes) Dp (waiting 527 minutes)
[07:43:00] Coren ping
[07:50:04] @search gerrit
[07:50:05] Results (Found 6): gerrit, whitespace, git-puppet, gerritsearch, ryanland, gitweb,
[07:50:14] !git-puppet
[07:50:14] git clone ssh://gerrit.wikimedia.org:29418/operations/puppet.git
[07:50:21] <3 wm-bot
[07:50:40] <3...
[08:39:06] !log tools petrb: started toolwatcher
[08:39:08] Logged the message, Master
[09:00:33] !log tools petrb: removing wd-terminator service
[09:00:39] Logged the message, Master
[09:02:54] * addshore slaps petan to attention
[09:03:01] hey
[09:03:05] what's up
[09:03:05] pm ;p
[09:04:09] petan: thanks a lot. also a full jdk (that contains javac) would be great (in fact needed with maven)
[09:05:14] I must fill a separate bug for it?
[09:07:51] (also I prefer to use openjdk7 instead but that would not be a problem)
[09:14:18] addshore: pong
[09:14:22] ping
[09:14:32] addshore: the basic runner for dumpscan is working
[09:14:43] check ~/src/python
[09:14:49] ~/src/dumpscan/python
[09:14:49] even
[09:16:30] * addshore goes to look
[09:16:32] I get an HTTP 500 for a little php script on tool labs... how can I see the stack trace?
[09:17:03] lbenedix: ~/php.err, or something like that
[09:17:09] or ~/error.log?
[09:17:14] a file in your home folder, anyway
[09:17:42] thx
[09:18:44] ebraminio hey
[09:18:50] ebraminio you don't need to fill in a bug :P
[09:18:57] just tell me what you are missing
[09:21:18] valhallasw: nice :>
[09:21:55] petan: openjdk-7-jdk would be fine!
[09:22:09] lbenedix you need to change owner to tool
[09:23:01] valhallasw: where is the git repo? :>
[09:23:43] petan: what do you mean?
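Acting on the hint above that the PHP error log lives in the tool's home directory: the exact filename varies, so it is worth checking both names that were mentioned (a quick sketch, not a documented interface):

    # from the tool account's home directory on tools-login
    ls -l ~/php.err ~/error.log 2>/dev/null
    tail -n 50 ~/error.log    # or ~/php.err, whichever exists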
[09:24:22] lbenedix that php script
[09:24:56] its local-lbenedix:local-lbenedix now
[09:25:19] ok, then you should be able to see error in phperror log
[09:25:22] in home
[09:28:54] !log tools petrb: installed openjdk7 on -dev
[09:28:56] Logged the message, Master
[09:29:05] PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 523800 bytes)
[09:29:17] 523800 < 134217728
[09:29:23] lbenedix your script eats too much ram hmm? :P
[09:29:31] tried to allocate +523800
[09:29:38] that is not a total memory usage
[09:29:46] that is how much it tried to allocate when it got rejected
[09:30:00] crazy...
[09:30:12] mmmmh
[09:30:17] lbenedix even mediawiki works on this configuration and that is memory expensive :P
[09:30:49] the script should get an image from a given url and return a dataURI
[09:31:16] !log tools petrb: houston we have problem, / on dev is 94%
[09:31:18] Logged the message, Master
[09:31:55] lbenedix hmm :/ will need to wait for Coren I guess, I don't want to fiddle with php configuration without knowing the details
[09:32:13] Okay
[09:32:37] why in the ******* is default rootfs 4gb only
[09:32:51] * petan slaps Ryan
[09:38:04] petan: thanks a lot! works great! :)
[09:38:04] next question how to run queries on the database from python code on tool labs?
[09:39:51] addshore will tell u :P
[09:40:37] * addshore looks?
[09:40:58] still xD
[09:41:09] I'm not a python person
[09:41:18] !log tools petrb: moved /var/log to separate volume on -dev
[09:41:20] Logged the message, Master
[09:41:29] oh true
[09:41:33] damn, that legoktm
[09:41:34] :P
[09:42:06] lbenedix: MySQLdb
[09:42:15] addshore: just local for now
[09:42:25] valhallasw: kk :)
[09:42:27] addshore: but feel free to add a github remote and push it there
[09:42:39] I may do at some point :)
[09:43:07] so what script do I need to pass the id number? :)
[09:43:14] jobrunner
[09:43:23] dumpparser is now used as library
[09:43:40] * addshore sees nothing called jobrunner ;p
[09:44:12] parserjob
[09:44:14] whatever :p
[09:44:22] something something job
[09:44:42] python parserjob.py 2 ?
[09:44:45] basically
[09:44:51] * addshore goes to try :D
[09:44:54] mehehee
[09:44:55] but it doesn't use all json properties yet
[09:45:01] kk
[09:45:10] just namespace and 'title contains' for now
[09:52:04] !log tools petrb: moving /usr to separate volume expect problems :o
[09:52:06] Logged the message, Master
[09:52:10] petan: can you install nano on tools-dev? :>
[09:52:12] !log tools petrb: on -dev
[09:52:14] Logged the message, Master
[09:52:33] addshore done
[09:52:37] cheers
[09:54:51] valhallasw: how am I meant to be passing the id? :P
[10:05:13] Upon login to tools-login.wmflabs.org I get a "Did you know that you can find a complete development environment on tools-dev server?". Is tools-login the same thing as tools-dev?
[10:05:36] no
[10:05:40] it's a different server
[10:05:51] ssh tools-dev
[10:05:58] to get there
[10:06:00] ;)
[10:12:13] addshore: as command line argument
[10:12:28] python parserjob.py 2
[10:12:37] haha, valhallasw just spotted you had job.run() commented out at the bottom xD
[10:13:53] title scanning seems to work awesomllyyy! :D
[10:32:29] petan: thanks for the sbt work, I marked the bug as resolved-fixed <3
[10:34:15] Hello y'all, I'm experiencing some anomalous behavior connected to: http://en.wikipedia.org/wiki/Template:GL_Photography_reply. It displays fine in the WikEd preview but merely flashes and disappears on actual pages.
[10:34:36] Anyone care to speculate?
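A hedged sketch related to petan's explanation of the fatal error above (the limit is 128 MB; the 523800 bytes is only the allocation that failed, not the total in use); the script name is hypothetical, and whether raising the limit is appropriate on Tool Labs is exactly what was left for Coren to decide:

    # show the limit PHP is currently running with
    php -i | grep -i memory_limit
    # for a one-off command-line test, try a higher limit for a single invocation
    php -d memory_limit=256M fetch_image.php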
[10:43:38] hi
[10:43:45] martijnHH it works only on -dev
[10:44:08] Kevjonesin you should ask in #wikipedia-en
[10:46:45] valhallasw: https://github.com/addshore/dumpscan :>
[10:47:14] valhallasw: :> https://github.com/addshore/dumpscan
[10:50:03] oh my client is going so laggy
[10:50:59] whats your github username? :>
[10:51:04] Was looking for above average tech skillz. #wikipedia-tech offered some insight.
[10:51:15] Tanxs anyway.
[10:53:55] addshore: valhallasw
[10:53:59] big surprise ;-)
[10:54:12] :>
[10:54:19] awesome, added you to the repo
[10:54:25] thx
[10:54:29] gonna add some of my thoughts to the issue list ;p
[10:54:37] sure
[10:57:26] Would y'all be the folks to talk to about automating (scripting?) some of the functions of http://en.wikipedia.org/wiki/Wikipedia:Graphics_Lab/Photography_workshop/Eight_Requests ?
[10:58:44] hmm
[11:03:01] I'm envisioning crowd sourcing suggestions by having users note image files in need of work on a page with data fields which would feed into a database for a bot to pull from to automatically update the display after automatically archiving "done" requests.
[11:04:23] Crowd source rather than pull from categories at random so as to prioritize in use files and avoid files that have already been fixed but still have old maintenace tags.
[11:06:13] heh petan you know what I was saying at the hackathon about huggle just dieing? :P
[11:06:14] well https://en.wikipedia.org/wiki/Wikipedia:Huggle/Feedback#Huggle_hanging_and_displaying_error_message xD
[11:06:25] meh
[11:11:08] is huggle on github?
[11:11:19] yup
[11:11:28] https://github.com/huggle/huggle
[11:11:34] thanks
[11:12:10] current version is in /huggle I think
[11:14:51] the fix is "buy more memory" ;)
[11:16:22] HAHA
[11:55:14] wow, thats hillarious
[11:55:24] ?
[11:55:25] huggle just seems to eat and eat and eat memory now xD
[11:55:31] wth
[11:55:34] it can start with say 250mb
[11:55:41] withing 10 seconds be at a 1GB
[11:55:46] within 20 be at 2Gb
[11:55:50] and then just break everything xD
[11:55:53] weird
[11:56:06] I think it's related to IE memory leak but howcome it started now?
[11:56:13] no idea :P
[11:56:22] * addshore digs deeper
[12:55:23] petan: still around?
[12:55:30] no
[12:55:50] :P
[12:55:56] any idea what line 57 here is actually setting? https://github.com/huggle/huggle/blob/master/huggle/Misc.vb
[12:56:03] GlExcess
[12:56:40] just making sure something doesnt loop too much? GlobalLoopExcess? :P
[12:57:05] Thats the problem :P
[12:59:03] setting it to 10 seems to work quite well
[13:06:22] yes
[13:06:40] addshore that is a check which prevents some dangerous loops I discovered from looping endlessly
[13:06:52] because there are some :P
[13:06:54] xD
[13:06:56] not that I wrote them
[13:07:02] it's original gurch work
[13:07:16] Well, they seem to suddenly get worse so I'll reduce the limit to 10 and do another release ;p
[13:07:32] fixes the memory problem for now :P
[13:15:50] addshore ok but keep in mind that reducing these to low values may affect the functionality, so make sure to test if it works
[13:16:12] I did :) as far as I can tell the limit is only used on 2 functions in Misc
[13:16:30] mhm
[13:16:43] you should like keep reverting vandals for at least 30 minutes to be sure :P
[13:16:51] I will :p
[13:17:15] oh actually its used in 59 loops
[13:17:38] I will definitely have to do a bit more testing ;p
[13:18:19] OR
[13:18:32] you can make a beta release and ping all these people listed as beta testers :D
[13:18:36] there is like 40 of them
[13:21:36] petan: maybe :P
[13:21:48] im gonna see how huggle works with wikidata first ;p
[13:59:22] valhalla1w: ty
[14:03:37] does the database schema graphic indicate which columns are indexed?
[14:14:17] Coren: https://bugzilla.wikimedia.org/show_bug.cgi?id=48910 unauthenticated lists of users?
[14:15:18] Ryan_Lane: ... that was confusingly worded. Getting the list of users without privileged credentials hits the LDAP search limit, is what I meant. Andrew is working on a paged search version that should circumvent that.
[14:15:34] ah
[14:15:41] yeah. we have a shitton of entries now
[14:15:52] I had increased the limit fairly high recently too
[14:15:58] I have a workaround now by changing the base DN to be project-by-project so that it gets fewer entries, but that just delays the problem.
[14:16:04] How recently?
[14:16:13] a month or so ago?
[14:16:31] Heh. Atm, bare 'ldaplist' hits the limit again. :-)
[14:16:35] yep
[14:16:49] that's a *lot* of entries
[14:17:05] paging solves all :D
[14:17:50] we're going to need to switch to the better method of uid/gid generation you suggested soon, too
[14:18:31] Well, we're using all-linux, so we *could* switch to 64 bits [ug]ids. :-)
[14:18:41] ah. wikitech-interface component. good idea
[14:18:52] Warning: There is 1 user waiting for shell: Justincheng12345 (waiting 0 minutes)
[14:18:53] ewww
[14:19:09] we have a really long time before that's necessary ;)
[14:19:28] 32 would be good though.
[14:19:54] IIRC, there haven't been issues with 32 bit [ug]ids since 2008
[14:20:06] * Ryan_Lane nods
[14:20:24] I'm sure grid engine will find a way!
[14:20:26] :)
[14:20:56] It's actually fairly simple to test; just need a single user with uid >64k to test it
[14:21:33] yep
[14:21:52] I think we're pretty far from hitting that uid range, but it's worth testing
[14:22:49] so… we really need a generic labs proxy that does https
[14:23:27] that can have *.wmflabs.org, *.tools.wmflabs.org and maybe some other certs
[14:23:35] well, cert names
[14:25:32] because I very badly don't want projects embedding certs
[14:26:15] self-service load balancer/proxy creation would be nice to go with it
[14:30:10] hi, Ryan_Lane :) what should i do to be able to reach the sql hosts (enwiki.labsdb etc) from the new vm?
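A rough illustration of the paged LDAP search Coren and Ryan_Lane discuss above, using OpenLDAP's paged-results control so the server-side size limit is never hit in a single response; the server, base DN and filter are placeholders, not the actual labs configuration:

    ldapsearch -x -H ldap://ldap.example.org -b "ou=people,dc=wikimedia,dc=org" \
        -E pr=500/noprompt "(objectClass=posixAccount)" uid uidNumber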
[14:30:39] JohannesK_WMDE: sudo su - local-catgraph
[14:31:22] Ryan_Lane: local-catgraph@sylvester:~$ mysql -h enwiki.labsdb
[14:31:23] ERROR 2005 (HY000): Unknown MySQL server host 'enwiki.labsdb' (0)
[14:31:39] I think you have to use a different defaults file
[14:31:58] -login and -dev have the host names in /etc/hosts. i tried using the ip addresses from there, but 'network is unreachable'
[14:32:11] ah. right
[14:32:14] Well, tools.wmflabs.org has a cert, but it's a no-enduser box.
[14:32:14] Coren: ^^ heh
[14:32:22] Warning: There is 1 user waiting for shell: Justincheng12345 (waiting 13 minutes)
[14:32:50] Coren: yeah, still not a huge fan of putting the certs there, though. I'd rather have it somewhere that isn't accessible to anyone but ops
[14:32:50] JohannesK_WMDE: It's a ugly hack atm. (Ryan_Lane: we need to fix that soon)
[14:32:50] yep :)
[14:33:03] JohannesK_WMDE: Look on tools at /etc/iptables.conf
[14:33:21] ok, will do Coren
[14:33:22] Coren: we can put them in LDAP
[14:33:23] JohannesK_WMDE: Then shudder in horror. :-)
[14:33:29] Ryan_Lane: The certs?
[14:33:36] the dns entries ;)
[14:34:00] uh. why do you do it this way? :)
[14:34:03] it's enwiki.labsdb.pmtpa.wmflabs, right?
[14:34:19] JohannesK_WMDE: because they are all on a single host and have different ports
[14:34:23] Ryan_Lane: Ah! Didn't you want to wait until we had a cleaner automated way of doing it though? I'm thinking pull from mediawiki-config/*.dblist?
[14:34:58] Ryan_Lane: But you said you didn't want a new mechanism to inject names.
[14:34:59] hm
[14:35:19] well, it depends on whether we want this in production or labs dns
[14:35:24] production is likely better
[14:35:35] petan: Coren: How can I download the huggle whitelist from inside the labs?
[14:35:39] for that we really want to wait till faidon is done with the new DNS server
[14:35:46] Is it? By definition the labsdb is... for the labs. :-)
[14:36:02] yeah, but the actual servers are in production
[14:36:10] a930913: Where does it live?
[14:36:10] we have weird splits of services
[14:36:37] Coren: heh. I see an issue with the naming convension you chose
[14:36:38] Coren: huggle.wmflabs.org
[14:36:38] Ryan_Lane: I guess the question is, does production care about the aliases?
[14:36:44] convention
[14:36:44] Ryan_Lane: What?
[14:36:49] mrwiki.labsdb
[14:37:02] how to choose between pmtpa and eqiad?
[14:37:10] a930913 why would you want to do that?
[14:37:17] There is just the one labsdb, isn't it?
[14:37:19] a930913 just use huggle-wiki instance
[14:37:34] petan: Hmm?
[14:37:37] Coren: no. per datacenter
[14:37:52] Ryan_Lane: Ah.
[14:38:07] Well, that changes nothing then. *.labsdb.* works just as well
[14:38:19] *.labsdb.*?
[14:38:32] enwiki.labsdb.pmtpa.wmflabs
[14:38:36] yeah
[14:38:39] that means it needs to be in LDAP
[14:38:40] enwiki.labsdb.eqiad.wmflabs
[14:38:41] um... to access the replicated dbs, should i just copy /etc/iptables.conf for now, Ryan_Lane? what do you suggest?
[14:38:56] yeah. that's likely easiest for now
[14:39:01] JohannesK_WMDE: For now, copy the iptables.conf is the easiest thing to do.
[14:39:10] ok. thx
[14:39:12] we'll be solving all the issues you're running into soon :)
[14:39:15] JohannesK_WMDE: Once the "real" system is in place, it just becomes obsolete/
[14:39:24] welcome to being the first to use this stuff outside of tools ;)
[14:39:39] hehe ok. :)
[14:39:48] horray for guinea pigs!
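A hedged sketch of what the /etc/iptables.conf workaround amounts to, based on Coren's description later in the log (one 192.168.99.n alias per database slice, NATed to the real replica host and port); the destination address below is purely illustrative:

    # copy the NAT rules from a tools host and load them
    sudo iptables-restore < /etc/iptables.conf
    # conceptually, each rule looks something like:
    #   iptables -t nat -A OUTPUT -d 192.168.99.1 -p tcp --dport 3306 \
    #       -j DNAT --to-destination 10.x.x.x:3306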
[14:40:18] (I used 192.* on purpose, so that once the "real" IPs are in place, just changing name resolution will work)
[14:40:30] * Ryan_Lane nods
[14:40:48] is it one ip per slice?
[14:41:50] * Coren nods.
[14:42:16] 192.168.99.n where n is the slice
[14:43:16] wtf? http://www.mediawiki.org/wiki/Extension:TwoFactorAuthentication
[14:43:25] why was a fork needed?
[14:44:41] ugh. and the license was changed to GPL3 or later?
[14:44:45] * Ryan_Lane grumbles
[14:44:48] what an asshole
[14:45:57] Warning: There is 1 user waiting for shell: Justincheng12345 (waiting 27 minutes)
[14:52:52] Ryan_Lane what is wrong on GPL3?
[14:53:30] petan: nothing, except that OATHAuth is licensed GPL2 or higher
[14:53:44] so, unless I change the license to GPL3, I can't take any changes
[14:53:57] which makes what he did a totally dick move
[14:54:47] is there some other extension with this functionality?
[14:55:02] http://www.mediawiki.org/wiki/Extension:TwoFactorAuthentication
[14:55:05] aha
[14:55:14] I see f Extension:OATHAuth,
[14:55:21] mhm he forked it...
[14:55:24] lol
[14:55:38] forked two months later
[14:55:48] he didn't even bother to ask if I'd accept patche
[14:55:50] *patches
[14:55:55] that's why I hate forking of repositories
[14:56:00] Change on mediawiki a page Wikimedia Labs was modified, changed by Danilo.mac link https://www.mediawiki.org/w/index.php?diff=705637 edit summary: [+15] +link to [[Wikimedia Labs/Tool Labs]]
[14:56:11] it just produces a mess
[14:59:02] AT&T is driving me insane this morning :P
[14:59:22] I don't know if my earlier message made it through, but it's not really legal for him to relicense gpl-2+ code as gpl-3+, I don't think
[14:59:44] well, it's or higher, so I think it is
[15:00:16] you could take a gpl-2+ codebase and add some gpl-3+ new code to it
[15:00:29] but you can't just change the license on the old code from 2+ to 3+
[15:00:53] and you're getting into really murky waters if the "new code" is in some of the same files as the old code
[15:02:05] Change on mediawiki a page Wikimedia Labs/Tool Labs was modified, changed by Danilo.mac link https://www.mediawiki.org/w/index.php?diff=705642 edit summary: [+82]
[15:02:51] bblack: hm. need a lawyer :)
[15:03:31] I was always under the impression that the code could be relicensed higher, but that the old code was still gpl2
[15:04:05] of course now that I said that, I'm not as sure as I was 15 seconds ago
[15:04:06] so any modifications afterwards would be gpl3, but previous codebase could be forked as gpl2 or higher
[15:04:33] of course, this makes things *really* difficult to deal with
[15:05:49] Ryan_Lane: Well, does TwoFactorAuthentication offer something OATHAuth does not? :-)
[15:06:14] scfc_de: a couple config options? simplified code?
[15:06:16] otherwise it seems not
[15:06:19] The only thing I'm fairly certain of: if I publish gpl2+ code, and someone forks it and makes minor modifications, I don't think they can relicense that as 3+ and thus prevent the changes flowing back to my 2+ copy.
[15:06:20] what's all murky is if they add a substantial body of new gpl3+ code and have different licenses on those files/chunks than on my original gpl2+ files/chunks, all within one project.
[15:06:44] bblack: they can prevent the new changes from being moved back in, can they not?
[15:06:49] the licenses are compatible to allow him to mix code like that, and I don't think you could then take his new gpl3+ additions back to your gpl2+ project
[15:06:50] unless you want yours to be mixed-license by chunk/file as well
[15:07:07] this confusion is one of the reasons I'm pissed off
[15:07:11] I don't want to deal with this
[15:07:34] Ryan_Lane: at the very least that's completely antithetical to the aims of the GPL in general. But I really don't think you can mod gpl-2+ code and then call the old+mods gpl-3+ only.
[15:07:47] * Ryan_Lane nods
[15:08:26] but there's a distinction to be drawn between "I made a few minor corrections/additions to the existing body of code" and "I mixed the old code with a separate chunk of brand new code of compatible licensing, into a single project with source files of different compatible licenses"
[15:08:35] bblack: To further the aims of the GPL, new versions of the licence are developed. In fact, the FSF pushes GPL 3 quite hard.
[15:08:51] yeah
[15:09:02] you can of course use GPL-2 without the "or later" clause
[15:09:27] But I can't imagine the intent was to let a forker force the original to upgrade his license
[15:10:48] hmmm. i installed iptables, copied iptables.conf from tools-dev and ran "iptables-restore < /etc/iptables.conf", but iptables -L still just prints an empty list
[15:10:59] The forker doesn't force anyone. Regarding removing "or later", I would ask a lawyer. The GPL is copyrighted.
[15:11:12] IANAL though. Maybe I'm wrong, and maybe this is all part of the FSF's plan to get rid of GPLv2
[15:12:16] ah no. iptables -t nat -n -L look OK.
[15:12:20] scfc_de: the v2-only variant of GPLv2 is a known thing
[15:12:34] you can see a ref to it here on GNU's compatibility matrix: http://www.gnu.org/licenses/gpl-faq.html#AllCompatibility
[15:12:41] That's basic market mechanism: If you can get better code in GPL than in public domain, do you accept the licence? The same goes if someone puts his improvements (or in this case "improvements") under GPLv3, and the original is GPLv2.
[15:13:19] well, I suppose that when you merge GPLv3 changes back, the old stuff could still be GPLv2 or higher. But it will be an incredible pain to figure out in the future if new changes are a derivative work of the original code, of the merged in code, or of both, which is probably not worth the headache if the proposed patch is not all that, and you don't want to figure out if you want to relicense under GPLv3
[15:14:04] 3 down and 1 to the right in that matrix is the box that says "OK: Convey project under GPLv3", which is one of the possible outcomes here
[15:14:10] on the one hand, my inner geek loves the intricacies of licencing. On the other, "AAAARGH"
[15:14:18] if you have a GPL-2+ project, and you want to bring in someone else's GPL3+ code, you have to change your project to 3+
[15:14:50] I think that kind of infection is a given
[15:15:28] The question is whether someone can take any chunk of GPL2+ code and release it unmodified or modified with a new GPL3+ license just because of the + in the 2+ part.
[15:15:35] martijnHH: well, it's all fairly nice and neat when a project decides to change its own licence. it's more of a pain when a fork comes along and decides to do it
[15:16:04] bblack: What's the harm?
[15:17:26] well the harm is the intent of the GPL is that public changes to GPL code can be re-incorporated in the upstream copy. If anyone can take an existing GPL-2+ codebase, make improvements, and release the combined old code + improvements as 3+ -only, they're preventing upstream from re-integrating changes unless upstream is willing to relicense their whole project (which often isn't possible - you need consent of perhaps a long list of authors)
[15:18:22] My suspicion/contention is that it's not legal to relicense the 2+ code as 3+ just because you made a few changes to it.
[15:18:51] but it probably is legal to have a project with several distinct source files with different but compatible licenses, some of which are 2+ and some of which are 3+.
[15:19:58] (in which case upstream can take back the small changes to the 2+ code, but can't incorporate the new, separate 3+ code that was added)
[15:19:59] now that there are a bunch of license geeks together by the way, I'm looking for a copyleft license that allows linking without restriction like the LGPL, but I dislike some details in there. Is the OSL 3.0 a reasonable alternative, or are there compatibility issues with that? Or is it smart to dual license?
[15:20:46] martijnHH: what details do you dislike?
[15:21:12] bblack: Well, if upstream is better off with sticking to GPLv2 than merging the changes, so be it. I don't see a problem. You haven't explained *why* it should be illegal to relicence 2+ code as 3+, though.
[15:21:49] hmmmm. Ryan_Lane, should the following work --
[15:21:55] jkroll@sylvester:/home/local-catgraph$ sudo su - local-catgraph
[15:21:55] local-catgraph@sylvester:~$ mysql -h 192.168.99.1
[15:21:55] ERROR 1045 (28000): Access denied for user 'jkroll'@'10.4.0.216' (using password: NO)
[15:22:10] why am i still 'jkroll' after sudo su :D
[15:22:44] JohannesK_WMDE: you need to use the replica config
[15:22:59] ah, ok..
[15:23:03] scfc_de: My basic premise for "why" is that the GPL (or any other license) doesn't change basic copyright law. Whoever authors the code owns the copyright and sets the licensing terms. If I author a large body of code and license it under GPL2+ terms, I don't think someone else has the copyright rights to relicense the whole thing as 3+ arbitrarily. It's not theirs to relicense.
[15:24:03] Ryan_Lane is it technically possible to mount another vd to existing instance?
[15:24:09] not that I need it
[15:24:13] JohannesK_WMDE: mysql --defaults-file=replica.my.cnf -h enwiki.labsdb
[15:24:16] bblack: Because you chose GPL2+? "This program is free software; you can *redistribute* it and/or *modify* it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, *or* (*at* *your* *option*) *any* *later* *version*." (Emphasis added.)
[15:24:19] they could make changes and put those new changes under a new license, but... like I said it's murky to try to define a mixed codebase like that. separate files with separate copyright blocks work
[15:24:23] petan: if we were using cinder, yes
[15:24:31] what is that?
[15:24:38] it's a block device service
[15:24:40] I don't know if that was a nova thing or kvm thing
[15:24:48] but kvm should be able to do that
[15:24:54] Coren, what's the update on S7?
[15:24:55] bblack: section 4c
[15:25:16] it manages block device space and talks with nova for it to be mounted to the instance
[15:25:23] ok how are current vda and vdb created?
[15:25:27] Cyberpower678: I'm on the review of centralauth now, made no easier by the fact that it's poorly documented.
[15:25:47] nova creates a disk and an ephemeral disk for all instances
[15:26:04] the vd disk is the ephemeral
[15:26:10] new images have very small rootfs
[15:26:22] 10G
[15:26:25] I had to move /usr to separate volume on tools-dev as we were on 100%
[15:26:30] no it has just 4gb
[15:26:31] Ryan_Lane: thanks. but there is no replica.my.cnf there yet and i have problems copying it:
[15:26:37] petan: that was an old image
[15:26:37] it doesn't have 10gb
[15:26:42] jkroll@sylvester:~$ sudo cp replica.my.cnf /home/local-catgraph/
[15:26:42] jkroll@sylvester:~$ sudo chown local-catgraph /home/local-catgraph/replica.my.cnf
[15:26:42] chown: changing ownership of `/home/local-catgraph/replica.my.cnf': Invalid argument
[15:26:42] jkroll@sylvester:~$ ls -lh /home/local-catgraph/replica.my.cnf
[15:26:42] -rw------- 1 root wikidev 50 Jun 5 15:25 /home/local-catgraph/replica.my.cnf
[15:26:42] new images have 10
[15:26:43] jkroll@sylvester:~$ ls -lh replica.my.cnf
[15:26:43] -rw------- 1 jkroll wikidev 50 Jun 4 17:16 replica.my.cnf
[15:26:52] also these new boxes have apache server by default
[15:26:53] why?
[15:27:03] that seems… wrong
[15:27:10] why can't i change the permissions o.O
[15:27:11] I created a ticket for it
[15:27:15] yeah, saw that
[15:27:17] Coren: ^^
[15:27:22] JohannesK_WMDE: Only root can chown.
[15:27:27] scfc_de: Does the right to redistribute it according to the terms of GPL3+ allow you to change the license on the code to 3+, though?
[15:27:28] scfc_de: the right to take advantage of 3+ terms is implicit in the 2+ license, but you could take advantage of those without replacing the copyright block
[15:27:39] JohannesK_WMDE: sudo -i please
[15:27:39] JohannesK_WMDE: But it should be enough if the file is readable from local-...
[15:27:59] JohannesK_WMDE: You need to use the /tool's/ replica.my.cnf not yours.
[15:27:59] scfc_de: yes, i sudo'd
[15:28:03] Coren: it looks like the replica config isn't there for the service group
[15:28:08] jkroll@sylvester:~$ sudo -i
[15:28:08] root@sylvester:~# chown local-catgraph /home/local-catgraph/replica.my.cnf
[15:28:09] chown: changing ownership of `/home/local-catgraph/replica.my.cnf': Invalid argument
[15:28:12] martijnHH: So you want LGPL-like terms, but you don't want to force people to display your copyright if they display other copyrights?
[15:28:17] petan: how new is this instance with 4G?
[15:28:31] petan: I checked that I fixed this a couple weeks ago with the latest image
[15:28:32] o_O
[15:28:36] Lemme go check.
[15:28:43] and yes, there was no replica.my.cnf there, that's why i tried to copy it :)
[15:28:45] @labs-instance tools-dev
[15:28:51] Coren, cool and not cool at the same time.
[15:28:56] @labs-info tools-dev
[15:28:56] [Name tools-dev doesn't exist but resolves to I-0000069a] I-0000069a is Nova Instance with name: tools-dev, host: virt11, IP: 10.4.0.119 of type: m1.small, with number of CPUs: 1, RAM of this size: 2048M, member of project: tools, size of storage: 30 and with image ID: ubuntu-12.04-precise (deprecated)
[15:29:10] yep. an old image
[15:29:10] Ryan_Lane it uses this deprecated image
[15:30:09] ... why in blazes is this owned by root?
[15:30:25] mhm
[15:30:57] ok Ryan_Lane so without this cinder thing, nova doesn't support any other volume to be mounted to existing instance?
[15:31:01] Coren: if you mean /home/local-catgraph/replica.my.cnf, i 'sudo cp'ed it. but i don't understand why i can't chown it (as root).
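Once the tool's own replica.my.cnf exists in its home directory, the intended access pattern per the advice above looks roughly like this; the _p database-name suffix is the usual convention and is an assumption here rather than something spelled out in the log:

    sudo su - local-catgraph
    mysql --defaults-file=$HOME/replica.my.cnf -h enwiki.labsdb enwiki_p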
[15:31:48] petan: no
[15:31:58] JohannesK_WMDE: Not sure about the chown (will look into it) but having copied it likely will have prevented creation of the "real" one.
[15:32:06] bblack: yes
[15:32:24] bblack: I think you miss the point of free software (at least in the FSF sense). It's to protect the program's *users*' "four essential freedoms".
[15:32:38] * Coren checks.
[15:32:47] Coren: when should it have been created? i only copied it *after* i saw that it wasn't there. that is, just now.
[15:32:56] Hm.
[15:33:34] Well, that part (replica access from not-tools) hadn't been tested yet. :-) It should have been created ~60s after the group's home's creation.
[15:34:25] dafu? Why is there no home for your service group?
[17:18:30] <^demon> Ryannnnnnn, I have puppet changes for you :D
[17:19:30] Ryan_Lane my bouncer died
[17:19:34] !logs
[17:19:34] http://bots.wmflabs.org/~wm-bot/logs/%23wikimedia-labs/
[17:19:36] wm-bot died as well
[17:19:50] Ryan_Lane if you replied to my question can you repeat it :PP
[17:22:16] petan: It might still be alive, on the other side of the split.
[17:22:26] OH
[17:22:30] then I killed it :(
[17:22:42] I didnt know there was a netsplit
[17:22:55] anyway it is back, just missing the longs
[17:23:05] s/ngs/gs
[17:24:06] Coren: what is the plan with -cg
[17:24:20] can you merge my patch when you have time :o
[17:24:24] petan: It will die soon; they moved to their own project.
[17:24:42] ok it was useful to trace packages missing in config
[17:24:45] of puppet
[17:25:25] also there is high priority bug regarding nfs
[17:25:45] petan: answer was no on adding more vd
[17:25:46] petan: Indeed. ATM, S7 is my top priority, then more bugs.
[17:25:56] cinder is required for that
[17:26:07] petan: current image has a 10G root
[17:26:11] Well, *fewer* bugs. :-)
[17:26:14] Ryan_Lane I know but I asked bunch of open stack questions
[17:26:26] maybe I already wasnt on freenode :/
[17:26:36] I didn't see them
[17:26:39] aha
[17:27:19] I wanted to know how many controllers / compute nodes we have and what kind of storage we use
[17:27:25] 1 controller
[17:27:28] 7 compute nodes
[17:27:29] what version of open stack as well :P
[17:27:30] SAS storage
[17:27:34] openstack essex
[17:27:40] whoops
[17:27:42] I mean folsom
[17:28:52] I mean swift / cinder
[17:29:03] http://docs.openstack.org/trunk/openstack-compute/install/apt/content/terminology-storage.html
[17:29:13] what we use if not cinder?
[17:29:18] I thought it is essential par
[17:29:20] t
[17:29:22] we aren't using cinder
[17:29:34] I know, hence the question
[17:29:37] we don't offer block storage as a service
[17:29:54] we're using nova's root/ephemeral disks that are always created
[17:29:54] ok but we have some storage, what kind of storage it is then?
[17:30:04] hmm
[17:30:17] that isnt described in that docs
[17:30:18] :/
[17:30:31] the image is the root disk
[17:30:44] these disks are living on controller right? on the local storage?
[17:30:55] they live on the compute nodes
[17:31:03] not the controller
[17:31:03] and gluster / nfs is separate physical server?
[17:31:04] ah
[17:31:11] so, when you go to create an instance...
[17:31:12] so storage is on nodes?
[17:31:22] the pictures on openstack show that storage is on controller
[17:31:28] and nodes have very small hdd
[17:31:32] just to run the essential OS
[17:31:42] and it says 2 CPUs, 4096 MB RAM, 15GB root, 40GB storage...
[17:31:47] 15GB root is the image
[17:31:54] 40GB storage is the ephemeral disk
[17:32:02] http://docs.openstack.org/trunk/openstack-compute/install/apt/content/compute-system-requirements.html
[17:32:10] Cloud Controller node (runs network, volume, API, scheduler and image services)
[17:32:20] Compute nodes (runs virtual instances)
[17:32:31] petan: that's because it shows the controller running the volume service
[17:32:40] I was thinking nodes run just the vm, dont host the storage
[17:32:44] aha
[17:32:49] volume service is that cinder?
[17:33:08] yes
[17:33:10] ok
[17:33:23] and you can use "boot from volume" for the images and ephemeral disks, too
[17:33:37] so each node we have also have a local storage? for vds or is that a network storage as well
[17:33:47] it's all local storage
[17:33:55] NFS/gluster is external
[17:33:59] aha, so the local disks should be very fast
[17:34:09] the local disks are SAS
[17:34:14] hmm
[17:34:20] that explains things
[17:34:23] they are generally fast
[17:34:33] yes I know I never had problem with local fs
[17:34:43] but, since it's shared with a lot of other instances, it's inconsistently fast
[17:34:54] but it depends on how loaded the host is...
[17:35:01] that's what I mean ;)
[17:35:09] unlike gluster / nfs which is one server per whole labs?
[17:35:15] yep
[17:35:17] or there are more of them?
[17:35:22] I think you said gluster has more nodes?
[17:35:23] well, gluster is 2 servers
[17:35:26] but nfs is living on 1
[17:35:31] it was 4 previously
[17:35:34] aha
[17:35:39] yeah, but the NFS server has way more disks
[17:35:56] and has two controllers writing to them
[17:35:58] we should have a picture of all this :D
[17:36:04] addshore how you created schema of tools?
[17:36:06] yeah. probably should
[17:36:20] he made some schema using some web app I think
[17:38:09] that project on labs we have is like a small openstack?
[17:38:45] @labs-project-info openstack
[17:38:45] The project Openstack has 7 instances and 16 members, description: A project for implementing, testing, upgrading and improving our various OpenStack software. Also used to develop and test OpenStackManager.
[17:38:48] this one :o
[17:39:16] I suppose so
[17:42:12] petan: yes
[17:42:24] petan: all components installed on a single instance
[17:42:57] good for developing against. ok-ish for testing againt
[17:43:03] *against
[17:49:28] I have made this image http://commons.wikimedia.org/wiki/File:Tool_labs_logo_with_text.svg . What did you think to use it as the Tool Labs logo?
[17:51:45] danilo: Wouldn't it be a little too unconfortably close to "bastart child of labs + toolserver"?
[17:51:50] bastard*
[18:04:46] Coren: Tool Labs is in Labs and it will substitute toolserver, i can't see why its unconfortably. But i am very new in Tool Labs, maybe i don't have well understood something.
[18:08:02] danilo I like it :3
[18:08:45] Coren tool labs /is/ bastard child :P
[18:08:53] of toolserver and labs
[18:08:54] :D
[18:30:56] What is the correct thing to do again if the home directory has gone read only?
[18:31:21] apmon: Complain here and hope that Ryan or I fix it.
[18:31:28] What project?
[18:32:18] maps
[18:32:19] Warning: There is 1 user waiting for shell: Jarekt (waiting 0 minutes)
[18:34:58] I wonder how long it'll take to move all of the home directories
[18:35:25] zz_YuviPanda: for a few minutes yes
[18:40:01] apmon, any better?
[18:40:59] Yes, thanks
[18:41:34] Have you designed gluster deliberately to keep you busy... ;-)
[18:42:53] It's actually much better right now -- a few months ago that would've caused a splitbrain and possible data corruption.
[18:43:01] Now it's just read-only… pretty painless
[18:45:50] Warning: There is 1 user waiting for shell: Jarekt (waiting 13 minutes)
[18:54:31] <^demon> I'm trying to make an instance but it's giving me "Failed to create instance."
[18:54:38] <^demon> (Just created 3 others just fine)
[18:55:38] ^demon, maybe you're over quota? What project?
[18:55:51] <^demon> solr.
[18:58:16] Hm, something is broken and I can't check quotas
[18:59:00] andrewbogott: it changed
[18:59:11] different syntax for nova-manage?
[18:59:20] no
[18:59:22] you just can't list
[18:59:23] https://wikitech.wikimedia.org/wiki/Help:Nova-manage
[18:59:28] OS_TENANT_NAME=bots nova quota-show bots
[18:59:30] yes, seriously
[18:59:58] andrewbogott: patch for showing it in the interface is waiting :)
[19:00:23] true
[19:00:38] ^demon: Quota is 20 cores, 10 instances. Which you haven't hit het.
[19:00:43] yet. So I don't know what's happening
[19:00:58] andrewbogott: patch for showing it in interface will also show you current usage
[19:01:17] * andrewbogott opens gerrit
[19:01:19] we can merge that in and deploy it :)
[19:01:34] the damn nova command doesn't have an option for showing the usage
[19:01:44] though you *can* get it, but it's ugly
[19:02:25] * ^demon just wants his instance
[19:02:27] <^demon> :)
[19:02:36] oh, wait
[19:02:41] the nova command *does* have it
[19:02:46] OS_TENANT_NAME=testlabs nova absolute-limits
[19:03:10] ^demon: well, for that we need to see which quota you are hitting, so that we can raise it
[19:03:39] Want me to deploy?
[19:03:43] it's likely ram
[19:03:45] andrewbogott: sure
[19:04:39] <^demon> Ryan_Lane: ty <3
[19:05:00] ^demon: it'll work now
[19:05:09] cores
[19:05:23] ah
[19:05:24] indeed
[19:05:25] tat too
[19:05:27] *that too
[19:05:46] ok. now it'll actually work :)
[19:05:49] <^demon> solr project is about to get way more active, manybubbles and I are going to start iterating on our Glorious New Searching Future.
[19:06:02] <^demon> \o/
[19:06:17] Is manybubbles the new Ram?
[19:06:21] It will be glorious!
[19:06:28] I'm the new me.
[19:06:40] fair enough
[19:10:04] <^demon> Ryan_Lane: Also I've totally taken over the performance project and it's basically hiphop now. I gave all the facebook guys access so they can try things.
[19:10:16] cool
[19:10:22] when will https://www.mediawiki.org/wiki/Wikimedia_Labs/Agreement_to_disclosure_of_personally_identifiable_information and https://www.mediawiki.org/wiki/Wikimedia_Labs/Terms_of_use become effective?
[19:10:39] waiting on finalization from legal
[19:10:55] <^demon> Reminds me, I need to poke legal about something.
[19:10:56] ETA?
[19:11:27] feels strange having to agree on something you can't agree on due to it's beeing a draft
[19:14:03] well, you can agree to the draft
[19:14:12] we'll send a notification of change when its finalized
[19:14:33] if you disagree with the changes, you're welcome to give feedback or not continue using the service
[19:14:33] Is there a way to increase the root storage of an existing instance?
[19:14:41] apmon: not really
[19:14:50] apmon: is your instance's root 4G?
[19:14:59] if so it was a broken image
[19:15:02] it should be 10GB
[19:15:06] yes
[19:15:26] So I'll need to recreate the instances?
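The two nova invocations quoted above can be combined into a quick quota check for any project; OS_TENANT_NAME selects the project, and the project name below is just the one being debugged in this conversation:

    OS_TENANT_NAME=solr nova quota-show solr    # configured limits
    OS_TENANT_NAME=solr nova absolute-limits    # limits together with current usage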
[19:15:34] building a new instance will fix that
[19:15:34] unfortunately, yes
[19:15:34] OK, thanks
[19:15:34] yw
[19:16:11] Ryan_Lane: well, true I can "agree" with it, but I can't legally agree with it
[19:16:17] or how to put it
[19:16:30] I can't agree with it bindingly
[19:19:07] Ryan_Lane: regarding the "Terms of use" I do have a couple of points/questions
[19:19:21] doesn't point 2 and 3 say the same thing basically?
[19:19:25] you'd be agreeing with the current terms
[19:19:59] point 2 is covered by point 3
[19:20:33] no. one is content and the other is software
[19:20:55] does point 5 prohibit someone to make a service to allow people in china to access/edit via tor?
[19:20:56] they are generally different sets of licenses
[19:21:01] software is content afaik
[19:21:52] * aude wonder why we have role::wikidata-repo-latest::labs and role::wikidata-repo::labs
[19:22:35] -latest installs the latest and does periodic fetches
[19:22:51] role::wikidata-repo::labs installs the current git trunk and then leaves it alone.
[19:23:05] trunk...
[19:23:48] andrewbogott: ok
[19:24:09] i think we usually "leave it alone" for our test instance
[19:24:17] for people to try, etc.
[19:24:28] trunk, tip, latest? I don't know what to call it
[19:25:05] fine as it is
[19:25:24] now our puppet scripts are broken ....
[19:25:32] Ryan_Lane: also point 9, does it mean the text must be displayed on all possible pages, including say json responses (or in case where the service doesn't have a "page", any only json), or does it mean just one page somewhere?
[19:25:35] i "fixed" it last time but need to submit it to gerrit
[19:25:44] and test it
[19:25:45] and only json*
[19:26:11] andrewbogott: tip I think is the general definition in git
[19:26:24] aude, I can work with you to fix things right now if you like.
[19:26:51] hmmm, ok
[19:27:10] i had to change something in the mediawiki single node module
[19:27:18] no idea if it's the right thing to do
[19:27:33] * aude prepares patch
[19:27:53] A totally other question, regarding database, is the accessible database layout identical to vanilla mediawiki?
[19:28:14] couldn't find any details on the wiki
[19:28:36] I don't really see the terms changing much
[19:29:13] Ryan_Lane: just saying that as long it's draft, it can't practically be binding
[19:30:12] maybe being clarified more
[19:30:15] Coren: so. I'd like to do an initial rsync of home directories from gluster to nfs
[19:30:15] skipping, of course tools, deployment-prep, and catgraph
[19:30:15] petan: did bots switch from gluster to nfs yet?
[19:30:26] no
[19:30:48] I think I should first announce a downtime
[19:30:54] anyone? regarding the db
[19:30:55] because still lot of people run bots there
[19:30:59] I don't see how point 2 is covered by point 3
[19:30:59] Ryan_Lane: That's basically how I did it; rsync once first to get the bulk.
[19:31:11] AzaToth: no, because we dont have access to all views
[19:31:13] I don't agree that software is content
[19:31:18] er, all columns*
[19:31:20] AzaToth: It's "mostly" identical, except for tables that aren't there at all.
[19:31:38] AzaToth: tl;dr: the views that you do have are identical to the underlying tables.
[19:31:41] also, not all databases have the contenthandler stuff set up like vanilla mw would have
[19:32:02] ugh. freenode is so laggy
[19:32:06] I notice
[19:32:19] I wondered how Ryan_Lane manage to type 4 lines in a row ツ
[19:32:54] AzaToth: displaying text, via point 9 is difficult
[19:33:03] Ryan_Lane: http://imgur.com/cYcEkGc
[19:33:18] I assume my text is in wrong order ヾ
[19:33:21] AzaToth: Yes, it's views. There are a few tables with alternative views for indexing reasons; the most salient being revision which has a revision_userindex
[19:33:21] just don't collect personal info, and you don't need to worry about it :)
[19:33:43] Ryan_Lane: heh
[19:34:13] I think the text is in the correct order of what I wrote :)
[19:34:22] it's not threaded properly
[19:34:45] Coren: I'm in the work of making a rails module for database access, and need to make sure what's different between normal mw and labs
[19:36:20] AzaToth: For the most part, it should be just some tables missing and a few extra views that are the same in substance just having different indices.
[19:37:21] any list of tables missing?
[19:37:22] Ryan_Lane: At the moment, the terms read as if a developer needs to include the two (rather large) boilerplates in *all* web pages.
[19:38:05] scfc_de: *all* publicly viewable products
[19:38:19] I don't know when that encompass(?)
[19:38:23] what*
[19:38:35] AzaToth: Do you think these are more or less? :-)
[19:38:53] AzaToth: well stuff like cu_*
[19:39:06] legoktm: true
[19:39:33] idk, shouldnt be that hard to get a list. just run show tables on labs and then on your vanilla install and diff them :P
[19:39:44] I would assume we don't have access to user.user_password
[19:41:47] Coren: can I test on tools-login, or should I make a temp project?
[19:41:48] AzaToth: You'd assume quite correctly. Some columns are nulled even before they get to the replicas, and the actual views null others conditionally.
[19:42:09] AzaToth: For simple looking around, you can do it from your user account without problem.
[19:42:37] AzaToth: Our scheme never /removes/ columns, though -- if the table is there it's all there. It only nulls values.
[19:42:49] ok
[19:43:13] could you add ruby1.9.1 to tools_login (want to use homesick :-P)
[19:43:23] https://github.com/technicalpickles/homesick
[19:49:39] Coren: right, so I'd imagine it's best to mount the NFS server from the gluster server
[19:49:53] Coren: can you export the mounts to labstore1?
[19:49:55] AzaToth: Precise supports 1.8.7; I know it's a little older, but we really prefer keeping the infrastructure as close to distro base as possible for security upgrades.
[19:50:14] I'd imagine the script needs to be modified for a global ip?
[19:50:41] Ryan_Lane: I actually did it from the project, but I see no reason to not open all to labstore1 actually.
[19:50:51] I'd like to kill gluster homedirs this week
[19:51:12] I don't want to do it from the project
[19:51:18] there's 160 of them ;)
[19:51:23] and some projects have no instances
[19:52:03] I'm going to do an initial rsync, then set a date for transition, then mark all gluster home directories read only
[19:52:06] then do another rsync
[19:52:09] then change ldap
[19:52:15] then people can reboot as they wish
[19:52:29] Ryan_Lane: No, there's a static exports: /etc/exports.d/ROOT.exports we can add labstore1 there. Gimme a sec.
[19:52:30]
[19:52:35] ah. sweet
[19:52:55] I <3 how much easier NFS is to deal with
[19:54:26] Ah, that'd complicate things. I forgot we had to dance with fsids.
[19:54:31] * Coren ponders.
[19:54:37] ah. right
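A minimal sketch of legoktm's suggestion above for getting the list of missing tables by diffing the replica against a vanilla MediaWiki install; the vanilla connection parameters and database name are placeholders:

    mysql --defaults-file=$HOME/replica.my.cnf -h enwiki.labsdb -N -e 'SHOW TABLES' enwiki_p | sort > labs-tables.txt
    mysql -u root -p -N -e 'SHOW TABLES' my_vanilla_wiki | sort > vanilla-tables.txt
    diff vanilla-tables.txt labs-tables.txt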
[19:55:03] actually, they won't be in use
[19:55:03] Not that hard, actually. I can create the exports from a simple grep if you give me a minute. Doesn't have to be highly dynamic since it's a one-shot deal.
[19:55:07] so it doesn't matter
[19:55:23] I'm going to mount, rsync, umount
[19:55:37] and do the same later, then expose them to clients
[19:55:43] so fsid isn't an issue here
[19:56:04] Hm. Iff you are absolutely certain you won't accidentally mount two. :-)
[19:56:35] It'll only take me a minute to do it safe. 10.0.0.41 right?
[19:56:46] yep
[19:56:50] do it safe, then :)
[19:59:30] Ryan_Lane: All done. You should be able to mount all the /exp/$project from labnfs
[19:59:39] cool. thanks
[20:00:23] Ryan_Lane: Hint: "-o port=0,nfsvers=4,hard,sec=sys"
[20:00:37] Oh! Wait. You need to kill idmap too.
[20:01:02] echo 1 >/sys/module/nfs/parameters/nfs4_disable_idmapping
[20:01:16] do I also need to stop the idmap service?
[20:01:35] Ryan_Lane: No need; as long as it knows when it mounts you're okay.
[20:01:39] * Ryan_Lane nods
[20:03:46] <^demon> Ryan_Lane: Can I get some code review?
[20:03:54] gimme a bit
[20:04:09] <^demon> Okie dokie
[20:07:51] inspired by Qcoder: Know what would be nice? read-only access to dumps on all nodes in, say /dumps
[20:08:16] Coren: http://paste.debian.net/8669/
[20:08:28] do they already happen to be available?
[20:09:01] Krenair, have you already tested https://gerrit.wikimedia.org/r/#/c/65436/ on nova-precise2?
[20:09:14] Coren: do you mean that "universe" is not supported?
[20:09:29] AzaToth: No, I mean that I hadn't checked in universe. :-)
[20:09:54] heh
[20:10:26] I think so, yes andrewbogott
[20:10:42] Would you like me to merge and deploy it now? anything left to do?
[20:11:07] AzaToth: And I note, with pleasure, that it doesn't conflict with 1.8. Will add shortly.
[20:11:37] Krenair: ^
[20:12:56] root@labstore1:/mnt# mount -o port=0,nfsvers=4,hard,sec=sys labnfs.pmtpa.wmnet:/exp/bastion/home /mnt/nfs
[20:12:56] mount.nfs: mounting labnfs.pmtpa.wmnet:/exp/bastion/home failed, reason given by server:
[20:12:56] No such file or directory
[20:12:58] martijnHH: Look in /public/datasets/public.
[20:12:58] Coren: :(
[20:13:02] I should probably test it again...
[20:13:25] If I could remember my password... guess I'll have to go reset again
[20:13:42] Krenair: ok :) Just post a note in gerrit when you're feeling confident. It looks fine to me.
[20:13:53] thansk scfc_de
[20:13:57] or thanks
[20:14:04] Ryan_Lane: drwxr-xr-x 3 root root 38 Jun 5 19:17 /exp/bastion/home/
[20:14:07] o_O
[20:14:14] yep
[20:14:15] It's there.
[20:15:00] same issue with labnfs.pmtpa.wmnet:/exp/bastion
[20:15:13] Aha. But for some reason it wasn't in the list of exports at all. I wonder why.
[20:15:25] I saw it in the list
[20:15:34] No, it is indeed.
[20:15:43] hm.
[20:15:58] Yep: /exp/bastion 10.0.0.41
[20:16:40] * Coren boggles a bit.
[20:17:28] ah
[20:17:38] Hey Ryan_Lane, can you give testkrenair shell rights on nova-precise2?
[20:17:43] Coren: labnfs.pmtpa.wmnet:/bastion/home
[20:17:47] virtual filesystem ;)
[20:17:51] I worked around this last time by commenting the check for loginviashell out, but I'd prefer to do this properly :p
[20:17:52] Oooo! True. No /exp!
[20:18:05] * Coren facepalms. How did I forget that?
[20:18:18] and not have to fiddle with it in future
[20:18:23] yeah. one sec
[20:19:16] Krenair: what's the actual username?
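Putting together the procedure Ryan_Lane describes (mount, rsync, umount) with Coren's mount options, the idmap hint, and the corrected export path noted above; the rsync flags and the source directory layout on the gluster side are assumptions, not something specified in the log:

    echo 1 > /sys/module/nfs/parameters/nfs4_disable_idmapping
    mkdir -p /mnt/nfs
    mount -o port=0,nfsvers=4,hard,sec=sys labnfs.pmtpa.wmnet:/bastion/home /mnt/nfs
    rsync -aHAX --numeric-ids /mnt/gluster/bastion/home/ /mnt/nfs/    # source path is illustrative
    umount /mnt/nfs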
[20:19:17] andrewbogott: https://gerrit.wikimedia.org/r/#/c/67144/
[20:19:22] in the wiki
[20:19:26] if you have any ideas or suggestions
[20:19:42] Ryan_Lane, 'TEST Alex Monk'
[20:20:02] any other permissions I should give the user?
[20:20:13] 0/ ChrisGualtieri
[20:20:21] that user now has shell
[20:20:38] Hi, first time here for me.
[20:21:28] I was hoping someone could help me out; I want to get a list of all the articles on Wikipedia with possible typos so they can be corrected.
[20:22:03] aude: That looks fine, want me to merge it as is or do you want to work on it more?
[20:22:11] http://ganglia.wmflabs.org/latest/ looks strange.
[20:22:21] ChrisGualtieri: As a rule, "possible typos" is so immensely vague as to be impossible to define. Finding /specific/ common typos should be simple enough.
[20:22:35] it's still broken
[20:22:36] Using the list of regexes
[20:22:39] per the errors i got
[20:22:41] ChrisGualtieri: Do you know of a former task that did that?
[20:22:46] TypoScan
[20:22:57] I personally cleared the backlog there many months ago
[20:22:59] aude: One thing to know about those classes is that you'll need to 'sudo su -' before doing puppetd -tv.
[20:23:12] hmmm ok
[20:23:16] If you just do 'sudo puppetd -tv' then some things break
[20:23:21] But not the things you're reporting...
[20:23:25] i did sudo -s
[20:23:49] hm, ok.
[20:23:52] thanks Ryan_Lane
[20:24:28] aude, I feel like I've seen those messages but I'm not sure why they happen :(
[20:24:35] andrewbogott, "Alex Monk added you to project Nova Resource:Testing" - yup, works
[20:24:37] If you have ideas let me know! I will also try to take a look.
[20:24:47] Krenair: Cool, I will merge!
[20:24:52] I don't actually need the functionality of TypoScan; I am prepared to spearhead the task from a single list
[20:25:09] hmmm, ok
[20:25:30] i can certainly investigate
[20:25:48] without these scripts running, our test repo is useless, so no point
[20:27:10] Krenair, deployed but I can't make it do anything :(
[20:27:28] aude, same failure if you run them separately on the command line?
[20:27:39] i can try
[20:27:42] Coren: Do you think it can be done? I got the list of regexes; it's the current AWB rules
[20:27:56] maybe it's just not finding those files
[20:27:56] ChrisGualtieri: Have you asked Reedy/MaxSem/mboverload? It's probably much easier for them to judge what needs to be done.
[20:27:59] maybe something got moved
[20:28:01] andrewbogott, you mean it's failing in production?
[20:28:11] Failing, or I'm misunderstanding what it should do
[20:28:11] or you haven't been able to test it?
[20:28:23] ChrisGualtieri: It shouldn't be too hard, the titles can be extracted easily enough from the database (it's in the page table)
[20:28:33] Mboverload has had no activity since October 2012; Reedy replies, but never queues it up
[20:28:46] ChrisGualtieri: For simple regexp searches over the dumps, I think you can use pywikipediabot, and IIRC gwicke had a javascript thing as well for that.
[20:28:48] 'Cause it takes a fucking age to do it
[20:29:03] you add someone to a project, and they get a notification andrewbogott
[20:29:11] Including if I add myself, right?
[20:29:18] no
[20:29:20] Reedy: The scan or the migration?
[20:29:30] why would you want a notification for adding yourself to a project? :P
[20:29:31] The scan
[20:29:44] Well, fair point.
[20:29:51] I just added you to rt-testing. Get a notice?
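
For the typo-list request above, a sketch of the simplest possible dump-based scan: stream a pages-articles dump and print the titles whose text matches a typo rule. The dump filename and the two sample rules are placeholders; a real run would load the full AWB/TypoScan rule list instead.

    # Sketch of a regex typo scan over a pages-articles XML dump.
    # The dump path and the two sample rules are placeholders.
    import bz2
    import re
    import xml.etree.ElementTree as ET

    DUMP = "enwiki-latest-pages-articles.xml.bz2"  # assumed local file
    RULES = [re.compile(r"\bteh\b"), re.compile(r"\brecieve\b")]

    def local(tag):
        # Strip the MediaWiki export XML namespace, e.g. "{...}page" -> "page".
        return tag.rsplit("}", 1)[-1]

    with bz2.open(DUMP, "rb") as f:
        title = None
        for event, elem in ET.iterparse(f):
            name = local(elem.tag)
            if name == "title":
                title = elem.text
            elif name == "text":
                text = elem.text or ""
                if any(r.search(text) for r in RULES):
                    print(title)
            elif name == "page":
                elem.clear()  # keep memory bounded on a multi-GB dump
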
[20:30:01] andrewbogott: now i see some of the issues i had last time when I "manually" changed the puppet files
[20:30:04] notice: /Stage[main]/Mediawiki_singlenode/Exec[import_privacy_policy]/returns: PHP Notice: Undefined index: SERVER_NAME in /srv/mediawiki/LocalSettings.php on line 30
[20:30:17] problem with the way the local settings are vs. the autogenerated ones
[20:30:26] Reedy: Is the URL to Toolserver hardcoded in the DLL?
[20:30:32] Yes
[20:30:34] But that's easily fixed
[20:30:41] And it's a Toolserver MMT
[20:30:49] so an account can be added to it, no problem
[20:31:26] andrewbogott, yep.
[20:31:33] Krenair: ok then :)
[20:31:41] "Andrew Bogott added you to project Nova Resource:Rt-testing" (1 minute ago)
[20:31:54] I got a new GPU, but the database scanner doesn't seem to work in my AWB with the regex rules, and it doesn't seem to utilize the GPU at all.
[20:32:10] No, it wouldn't use the GPU
[20:32:46] Reedy: If you have me ("timl" on Toolserver) added to the MMT, I'll take a look at what's needed to migrate it to Tools.
[20:32:57] Very little
[20:33:08] It's a PHP script as a frontend to a MySQL database
[20:34:00] All that part does is serve a list and record completion
[20:34:16] the list building is a C# app that works with the regex list and a database XML dump
[20:34:31] Does that run on Toolserver as well?
[20:34:46] heh. the whole import of enwiki, frwiki, and a bunch of smaller wikis just ran in about 12 minutes :)
[20:35:11] scfc_de: is it a known problem that ganglia doesn't work?
[20:35:26] ahhhhh i take that back. working again
[20:35:44] No
[20:35:57] JohannesK_WMDE: Not to me, but Ryan_Lane is the one to ask. But when you asked some time ago, I saw it down as well.
[20:35:57] Krenair: oh. awesome. I see you merged in a new echo notification :)
[20:36:20] ganglia is down?
[20:36:34] ugh
[20:36:42] what's up with the scripts on that box?
[20:36:42] Reedy: We have mono on Tools, petan uses it IIRC. So it shouldn't be a problem to use the grid to do the scan.
[20:36:51] Maybe, maybe not
[20:36:55] It's a GUI app currently
[20:37:01] so you'd need to write a CLI interface to it
[20:37:19] Ryan_Lane: Some time ago it barfed "There was an error collecting ganglia data (127.0.0.1:8654): fsockopen error: Connection refused".
[20:37:22] http://toolserver.org/~awb/mono/
[20:38:03] ok. the sources are repopulating now
[20:38:15] at some point I need to clean up the labs config for this
[20:38:30] Reedy: Is that the source?!
[20:38:33] No
[20:38:52] everything is reporting again
[20:38:57] http://svn.code.sf.net/p/autowikibrowser/code/
[20:39:12] may take a bit for it to work properly
[20:39:57] Reedy: Thanks, I'll take a look.
[20:40:26] petan: schema of tools???
[20:41:00] I'm not good at programming... but I really do like working on backlogs like that
[20:41:06] addshore picture
[20:41:34] scfc_de: what's up
[20:41:43] mono :p
[20:43:26] petan: did you switch bots to using nfs?
[20:43:30] Ryan_Lane: I'm checking out labs db replicas for the first time and wondering if there are plans for user preferences (anonym of course)
[20:43:33] no
[20:43:37] Ryan_Lane no
[20:43:38] Toolserver has user_properties_anonym
[20:43:43] I think I should first announce a downtime
[20:43:45] ok, so I should rsync gluster home dirs to nfs ones
[20:43:52] ok
[20:43:54] | up_property | up_value | ts_user_touched_cropped
[20:44:11] Krinkle: not sure. Coren is the guy to ask
[20:44:14] k
[20:45:02] Coren:
[20:45:33] ok. homedir nfs is being populated :)
[20:46:39] Krinkle: Would be necessary for example for https://en.wikipedia.org/wiki/Wikipedia:Database_reports/User_preferences as well.
[20:46:52] scfc_de: Exactly
[20:47:01] I run queries like that quite often
[20:48:05] Krinkle: user_properties is in the gray area that currently defaults to off. It's not accessible to a normal user on-wiki, so there is an expectation of privacy we probably won't break without project-level consensus that it's okay.
[20:48:36] Heck, it's not even accessible to admins. :-)
[20:48:38] Coren: I said "anonym of course"; that means an anonymized version of the data.
[20:49:00] not tied to a user account, only statistics about it, and only for a subset of the properties naturally
[20:49:25] Toolserver has an empty user_properties, but has a view "user_properties_anonym" with columns up_property | up_value | ts_user_touched_cropped
[20:49:39] afaik that is not controversial
[20:50:26] Krinkle: Perhaps. That'll need more discussion, at the very least. "Available on the TS" should not be used as a sign of okayness -- there are a couple of things that could be done on the TS that really shouldn't have been. :-)
[20:51:07] And there are technical issues with replicating data that has its primary key elided.
[20:51:23] I know, but I don't think this is one of them. I mean, I've never heard anyone, anywhere, complain about this. And contrary to some data "available" on TS, this doesn't even have an opt-in suggestion in the guidelines. It is publicly exposed in various tools.
[20:51:43] Coren: I imagine the replication is not a subset (even now), only the view is a subset, right?
[20:51:52] Or are the sensitive columns currently not replicated?
[20:52:19] Nope. Sensitive columns are elided even before they land on the labsdb, and sensitive tables never get there in the first place. :-)
[20:52:25] e.g. right now the user table is invisible in labs, but I assume that data is replicated, right?
[20:52:32] ^demon: ok.
[20:52:36] Hm.. interesting
[20:52:37] ^demon: which changes?
[20:52:44] just https://gerrit.wikimedia.org/r/#/c/63514/ ?
[20:52:46] Krinkle: Actually, the user table is visible -- not all of it, of course.
[20:53:16] Coren: So where do I make a request for this? I imagine various more requests like it for other pieces will follow soon, as tools migrate and depend on database info existing
[20:53:31] <^demon> Ryan_Lane: https://gerrit.wikimedia.org/r/#/q/status:open+project:operations/puppet+branch:production+topic:gitblit,n,z
[20:53:43] "No" is an acceptable answer, but there needs to be a place for that as well :)
[20:53:52] Coren: Sorry to bother you, but in your job title "Operations Engineer - Tool Labs (Contractor)", I assume the Contractor applies to the whole rather than just the "Tool Labs" part?
[20:54:11] petan: yee I made that terrible picture ;p
[20:54:20] jarry1250: I believe it does.
[20:54:53] Krinkle: Well, we don't /have/ a process now, it's all very ad-hoc. I suppose figuring out the answer to that question "how" would be a good first step. :-)
[20:55:50] petan: do you like the little picture? ;p
[20:55:56] Reedy: Is it possible to restart the TypoScan project?
[20:56:04] yes
[20:56:09] good :>
[20:56:20] anything you think it is missing? Or a way it could be better? :>
[20:57:48] ChrisGualtieri: That's what I'm looking into.
[20:58:25] Thank you. I'd be very happy to work on it again
[21:00:14] AzaToth: ruby 1.9.3p0 (2011-10-30 revision 33570) [x86_64-linux]
[21:00:19] chrismcmahon: Checked out https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help yet?
[21:00:28] wait, I meant ChrisGualtieri
[21:00:31] sorry for the misping
[21:01:47] martijnHH: there is a lot more information on that page since I read it last :)
[21:02:23] :D
[21:06:02] ChrisGualtieri: Without much work, yes
[21:06:37] I'm sure I said before (I recall talking to MaxSem about it too) that running the scanner on my desktop at home was very inefficient
[21:06:49] most of the system being idle, and it being slow, taking ages
[21:07:20] Yeah; back in like Sept. 2012 I remember that
[21:07:49] Reedy: Well, on the grid you wouldn't have to mind it taking ages :-).
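
For reference, the Toolserver view mentioned above only supports aggregate-style reporting; something like the query below is what reports such as Database_reports/User_preferences boil down to. Note that user_properties_anonym does not exist on the labs replicas at this point in the discussion, so this is purely illustrative, and the host and credentials file are placeholders.

    # Illustrative only: the kind of aggregate query the Toolserver
    # user_properties_anonym view allows.  No such view exists on the labs
    # replicas at the time of this discussion; host/credentials are placeholders.
    import os
    import subprocess

    DB_HOST = "sql.toolserver.org"              # placeholder SQL host
    DEFAULTS = os.path.expanduser("~/.my.cnf")  # placeholder credentials file

    QUERY = """
    SELECT up_property, up_value, COUNT(*) AS users
    FROM user_properties_anonym
    GROUP BY up_property, up_value
    ORDER BY users DESC
    LIMIT 50;
    """

    print(subprocess.check_output(
        ["mysql", "--defaults-file=" + DEFAULTS, "-h", DB_HOST,
         "--batch", "-e", QUERY, "enwiki_p"],
        text=True))
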
[21:08:02] Sure
[21:08:06] Lol, someone left me an irc memo... didn't even know freenode had memos
[21:08:13] I wouldn't have minded so much if my pc was busy doing the work
[21:08:20] But running it at mostly idle is wasteful
[21:08:27] Yep.
[21:31:55] Coren: https://wikitech.wikimedia.org/w/index.php?title=Special:NovaProject&action=displayquotas&projectname=tools \o/
[21:32:38] \o/
[21:33:07] I like how it makes sure you know how many cores you are using by saying it twice. :-)
[21:33:20] shit
[21:33:29] didn't notice that ;)
[21:33:36] * Ryan_Lane goes to push in a change
[21:35:19] next time I do a deploy that'll be fixed :)
[21:36:32] have to be a project admin to displayquotas? :/
[21:36:38] yep
[21:36:42] which makes sense ;)
[21:37:00] you'll never hit quotas unless you are a projectadmin
[21:37:07] ah, of course.
[21:41:20] https://github.com/dotcloud/openstack-docker < Could be interesting for tools-style usage, where it's running packaged-up ruby/python apps on a set port
[21:42:42] I don't see why you'd use docker rather than just using containers via scheduler hinting in openstack
[21:42:55] oh, right, it also does app dependencies
[21:43:20] so, salt-cloud + containers via scheduler hinting :)
[21:43:25] It's containers with some more magic for easier handling of deps etc
[21:43:46] you could just have salt do it, in theory granted :P
[21:43:52] or fabric
[21:43:59] petan: do we have a tools labs group for huggle ?
[21:44:00] or capistrano
[21:44:02] or juju
[21:44:05] Talking of salt - have you tried master/master yet? it's pretty sweeeeet =D
[21:44:09] it's another orchestration framework
[21:44:15] wait. it has master/master now? :)
[21:44:20] yep
[21:44:28] well I don't think it's in a release yet - it's in head
[21:44:30] so 0.16
[21:44:31] I have /not/ tried that
[21:44:38] but I want it :)
[21:44:44] I need it for eqiad/pmtpa labs
[21:44:50] You have to manually set up the keys/sync files atm
[21:44:56] that's easy enough
[21:45:07] I'm running 3 masters, shared key, cron'd git pull of settings for w0rk
[21:45:20] Same as puppet works, pretty much
[21:46:19] The only thing I don't totally like is that the master that sends the command owns it - so returns etc. run through that server. They don't load-balance back across if it fails in the middle
[21:49:15] well, that's not great, but it's better than before
[21:52:21] I can haz foodz?
[21:53:44] * Damianz gives Coren|Dinner a cookie
[21:54:10] Coren's lucky, every time Ryan was food someone ate him
[22:34:19] Does anyone know about mwreview.wmflabs.org, for "The user experience review queue"? I don't know how to search for hostnames on wikitech.
[22:35:26] spagewmf: https://wikitech.wikimedia.org/wiki/Nova_Resource:I-000002ae
[22:35:32] you need to do an "everything" search
[22:35:45] and search by the host's IP
[22:35:50] 18 $> host mwreview.wmflabs.org
[22:35:51] mwreview.wmflabs.org has address 208.80.153.249
[22:35:57] https://wikitech.wikimedia.org/w/index.php?title=Special:Search&search=208.80.153.249&fulltext=Search&profile=all&redirs=1
[22:40:50] Ryan_Lane: ah, OK. I thought I'd have to semantic search for instance name. It's a shame "Everything" search doesn't find DNS names
[22:41:10] the everything search just searches page content
[22:42:43] jorm, is https://www.mediawiki.org/wiki/User_experience_review_queue and its machine mwreview.wmflabs.org still in use?
[22:43:03] yes and no.
[22:43:22] i'm still using mwreview.wmflabs.org, but it's a host for unicorn.
[22:43:33] i was barely aware that the "review queue" existed.
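
The hostname-to-wikitech lookup walked through above (resolve the labs hostname, then feed the IP to the "everything" search) can be wrapped in a few lines; the search URL parameters are copied from the log, the rest is a sketch.

    # Resolve a *.wmflabs.org hostname and build the wikitech "everything"
    # search URL for its IP, as described above.
    import socket
    import urllib.parse

    def wikitech_search_url(hostname):
        ip = socket.gethostbyname(hostname)
        params = {
            "title": "Special:Search",
            "search": ip,
            "fulltext": "Search",
            "profile": "all",
            "redirs": "1",
        }
        return ("https://wikitech.wikimedia.org/w/index.php?"
                + urllib.parse.urlencode(params))

    print(wikitech_search_url("mwreview.wmflabs.org"))
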
[22:44:12] lore-mining on mediawiki.org :)
[22:44:52] Ryan_Lane, would it be possible to store the *.wmflabs.org domain in the page content? would be nice to be able to search by it...
[22:45:06] Krenair: maybe
[22:45:28] we need to restructure the instance pages anyway
[22:45:48] Krenair, just use https://wikitech.wikimedia.org/wiki/Special:Ask , problem solved :)
[22:48:27] Special:Ask is… not very fun to use
[22:49:21] indeed. Besides, it seems DNS name isn't a property, though [[Instance name::mwreview]] finds something
[22:50:43] it's not in the page content anywhere :)
[22:52:04] if it's not page content and not a Special:Property, then ??! It's magic, hence the unicorn symbol
[22:52:10] spagewmf, I have no idea how to use that.
[23:24:53] * AzaToth enters snitch mode
[23:25:14] Must inform you that http://tools.wmflabs.org/xtools/pcount/ is violating the TOS §9
[23:25:25] * AzaToth exits snitch mode
[23:26:02] AzaToth: in which way?
[23:26:54] "The account creation text and the agreement to disclosure of personally identifiable information must be displayed to end users of any publicly viewable products."
[23:27:06] ah
[23:27:38] that's only necessary if it collects PII or if it lets you make accounts
[23:27:49] and it looks like it does neither
[23:28:33] Ryan_Lane: 1. I didn't know that, 2. http://en.wikipedia.org/wiki/Wikipedia:Village_pump_%28technical%29#X.21.27s_Edit_Counter
[23:28:57] Ryan_Lane: shouldn't the TOS explicitly state that, then?
[23:29:25] the information is 100% publicly accessible
[23:29:33] what do they want privacy *from*?
[23:29:39] I don't know
[23:30:03] the "Wikimedia Labs/Agreement to disclosure of personally identifiable information" states "By creating an account in this project and/or using Labs Services"
[23:30:21] which I assume means it includes all services, not only those harvesting personal information
[23:30:29] that needs to be clarified
[23:30:49] it's meant to be shown when collecting info, or when creating accounts
[23:31:31] * AzaToth thought the "account creation" actually referred to the process of making labs accounts
[23:32:03] it also refers to creating accounts within applications hosted in labs
[23:32:13] also?
[23:32:37] well, I guess not also
[23:32:48] since wikitech falls under the normal privacy policy
[23:32:50] it's the same in the account creation text: "By creating an account in this project and/or using other wmflabs.org Services"
[23:33:12] tools also falls under the normal privacy policy
[23:33:28] other wmflabs projects do not, because volunteers have full root
[23:35:56] but as the text says "and/or using other", doesn't it imply all publicly accessible pages must bear the notice?
[23:36:11] again, that needs to be clarified
[23:36:15] ok
[23:36:26] I thought you referred to the Terms of use line
[23:36:46] which I think needs to be clarified
[23:37:50] as it stands now, I need to smack it on all pages, even if said service doesn't harvest personal information or allow people to make accounts
[23:38:29] it needs to add an "if reasonable" or "if relevant"
[23:40:07] Also, imo the TOS §7.5 should have a link to examples of good hashing algorithms. Not all devs are masters of cryptography and know what a "strong hash" is
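
On that last point, one concrete reading of "strong hash" for stored passwords, sketched with the standard library's PBKDF2 so it needs no extra packages. The parameters are illustrative, not anything the TOS actually prescribes, and bcrypt/scrypt would be equally reasonable choices where those libraries are available.

    # Illustrative password hashing with a salted, slow KDF (PBKDF2-HMAC-SHA256).
    # The work factor and storage format are examples only.
    import hashlib
    import hmac
    import os

    ITERATIONS = 100_000  # example work factor

    def hash_password(password: str) -> str:
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                     salt, ITERATIONS)
        return "pbkdf2_sha256$%d$%s$%s" % (ITERATIONS, salt.hex(), digest.hex())

    def check_password(password: str, stored: str) -> bool:
        _, iters, salt_hex, digest_hex = stored.split("$")
        digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                     bytes.fromhex(salt_hex), int(iters))
        return hmac.compare_digest(digest.hex(), digest_hex)

    stored = hash_password("correct horse battery staple")
    print(check_password("correct horse battery staple", stored))  # True
    print(check_password("hunter2", stored))                       # False
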