[00:09:57] anyone knows what the hashs table is? [08:47:54] Warning: There is 1 user waiting for shell: Steenth (waiting 0 minutes) [09:01:26] Warning: There is 1 user waiting for shell: Steenth (waiting 13 minutes) [09:01:27] Warning: There is 1 user waiting for access to tools project: Steenth (waiting 6 minutes) [09:14:55] Warning: There is 1 user waiting for shell: Steenth (waiting 27 minutes) [09:14:56] Warning: There is 1 user waiting for access to tools project: Steenth (waiting 19 minutes) [09:28:23] Warning: There is 1 user waiting for shell: Steenth (waiting 40 minutes) [09:28:24] Warning: There is 1 user waiting for access to tools project: Steenth (waiting 33 minutes) [09:40:40] !tr Steenth [09:40:40] request page: https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Access_Request/Steenth?action=edit talk page: https://wikitech.wikimedia.org/wiki/User_talk:Steenth?action=edit&section=new&preload=Template:ToolsGranted link: https://wikitech.wikimedia.org/w/index.php?title=Special:NovaProject&action=addmember&projectname=tools [09:41:48] Warning: There is 1 user waiting for shell: Steenth (waiting 53 minutes) [09:41:49] Warning: There is 1 user waiting for access to tools project: Steenth (waiting 46 minutes) [10:42:28] can someone tell me how to connect to the mysql-db via python? [10:43:18] con = _mysql.connect('localhost', 'USER', 'PASS', 'enwiki.labsdb') // results in Error 2002: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) [10:45:41] lbenedix1: the mysqldb isnt at localhost ;p [10:45:59] I'm guessing python mysql.connect is mysql.connect(server,user,pass,db) ? [10:46:04] good to know [10:46:30] If so it should be mysql.connect('enwiki.labsdb',user,pass,enwiki_p) [10:47:52] great! [10:48:03] did it work? 
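[Editor's sketch of the working connection from the exchange above: the replica databases are not on localhost; the host is `<wiki>.labsdb` and the database is `<wiki>_p`. The helper below only derives those names. The actual `MySQLdb.connect()` call is left commented out because it needs real credentials; the `~/replica.my.cnf` path in the comment is an assumption about the local setup, not something stated in the channel.]

```python
def replica_params(wiki):
    """Map a wiki name like 'enwiki' to its replica host and database.

    Per the fix quoted in the channel, the replica host is '<wiki>.labsdb'
    and the public database name carries the '_p' suffix.
    """
    return {'host': '%s.labsdb' % wiki, 'db': '%s_p' % wiki}

params = replica_params('enwiki')

# With credentials in place (path is an assumed convention, not from the log):
# import MySQLdb
# con = MySQLdb.connect(host=params['host'], db=params['db'],
#                       read_default_file='~/replica.my.cnf')
```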
:D [10:48:12] yepp [10:48:18] gd gd :) glad I could help :) [10:48:25] !rq Steenth [10:48:25] https://wikitech.wikimedia.org/wiki/Shell_Request/Steenth?action=edit https://wikitech.wikimedia.org/wiki/User_talk:Steenth?action=edit&section=new&preload=Template:ShellGranted https://wikitech.wikimedia.org/wiki/Special:UserRights/Steenth [13:24:59] YuviPanda ping [13:25:04] pong [13:25:05] did you install the syslog [13:25:07] :o [13:25:09] on bots [13:25:14] no [13:25:17] mhm [13:25:19] exam tomorrow, just popped in for a minute :) [13:25:21] ok [13:25:25] i'll be out till monday, I think [13:25:33] ok [13:38:32] !log tools petrb: created instance for central syslog [14:00:33] Warning: There is 1 user waiting for shell: Florianschmidtwelzow (waiting 0 minutes) [14:13:55] Warning: There is 1 user waiting for shell: Florianschmidtwelzow (waiting 13 minutes) [14:27:24] Warning: There is 1 user waiting for shell: Florianschmidtwelzow (waiting 27 minutes) [14:40:53] Warning: There is 1 user waiting for shell: Florianschmidtwelzow (waiting 40 minutes) [14:54:27] Warning: There is 1 user waiting for shell: Florianschmidtwelzow (waiting 54 minutes) [14:57:56] Coren ping [14:58:07] pong? [14:58:17] how do I turn instance into using nfs [14:58:41] I created tools-syslog for this https://bugzilla.wikimedia.org/show_bug.cgi?id=48846 I would like it to use nfs instead of gluster [14:59:09] it's basically going to host a scribe daemon which the tools /can/ use to collect central logs [15:00:09] I created a service group local-logs and I think it would be best if we stored the logs somewhere inside its service home directory, per tool. So there would be some gui interface in tools.wmflabs.org/logs but for that I need it to have access to nfs [15:02:50] Coren? 
:o [15:03:04] Ah, it's fairly simple: [15:03:40] I would like to do that while you are here so that in case it stop accepting my key which will likely happen you can fix it [15:04:53] add the role::labsnfs::client to the puppet class for the syslog server. You /did/ add syslog server as a puppet class in modules/toollabs right? :-) [15:05:11] not yet, I did that for -mc server thought and it didn't get merged anyway... [15:05:36] sec [15:06:02] (Actually, you should make that syslog class include toollabs::infrastructure too [15:06:16] But yeah, role::labsnfs::client is the one you want. [15:06:21] ok [15:07:56] Warning: There is 1 user waiting for shell: Florianschmidtwelzow (waiting 67 minutes) [15:11:07] Coren what is managehome static_nfs [15:11:10] in that role options [15:11:49] That's for the (rare, not in use in Labs) cases where the NFS server is statically some other host. [15:12:11] ok I applied the class [15:12:38] Now add the syslog server in role::labs::tools, (manifests/role/labs.pp) [15:13:02] aude, ready for me to merge these patches? [15:14:13] how do I enable the new class as well? [15:14:15] Coren ^ [15:14:25] i don't see it in project specific groups [15:14:51] petan: You'll have to add it first in "Manage Puppet Groups" [15:14:55] that I did [15:15:07] I see it in puppet groups but not when I do configure instance [15:15:07] Oh, I see. [15:15:40] it's not there :'( [15:15:50] I see it. [15:16:09] * Coren wonders why you don't. [15:16:52] even if you configure the instance? [15:17:18] tools: [15:17:20] role::labs::tools::execnode [?] [15:17:21] role::labs::tools::shadow [?] [15:17:22] role::labs::tools::webproxy [?] [15:17:23] role::labs::tools::webserver [?] [15:17:24] that's all :/ [15:18:06] petan, what project/group/class are you talking about? [15:18:18] andrewbogott role::labs::tools::syslog [15:18:26] I need to apply this role [15:18:31] but it's not in wikitech interface [15:18:33] :/ [15:18:34] in what project? 
[15:18:36] tools [15:18:38] andrewbogott: tools [15:18:41] ok [15:18:42] * andrewbogott looks [15:18:50] petan: I see them all. I'm not sure why you wouldn't. [15:19:16] andrewbogott when you have time, add me to logmsgbot group in tools project, that bot is dead now I think [15:19:59] with some explanation how to restart it :> [15:20:11] !log tools Deleted no-longer-needed tools-exec-cg node (spun off to its own project) [15:20:20] yeah, I see it too. [15:20:22] Coren ok can you check them? [15:20:28] if you see it :P [15:20:48] petan: That'd break puppet: you haven't added the class to the role manifest yet. [15:20:56] ah [15:20:58] Now add the syslog server in role::labs::tools, (manifests/role/labs.pp) [15:21:01] ^^ [15:21:02] ok [15:21:29] Warning: There is 1 user waiting for shell: Florianschmidtwelzow (waiting 81 minutes) [15:22:08] petan, I don't see a logmsgbot service group… can you tell me more about what you need? [15:24:09] andrewbogott it's called local-morebots [15:24:13] :o [15:24:18] morebots is dead [15:24:54] Coren I added it but no idea if it's correct :o [15:24:59] hm, it's supposed to survive netsplits now [15:25:04] anyway, petan, added. [15:25:09] ty [15:25:21] ok so I how to restart it? :P [15:25:42] it uses the grid. [15:25:48] So, qstat/qdel/qstart [15:25:55] aha ok [15:26:05] Um… and you can search the history for the jstart commands [15:26:12] no jsub? [15:26:15] k [15:26:23] I think jstart is a wrapper around jsub [15:26:54] petan, or you can just ask me to do it :) Probably best if you know how as well though [15:26:56] petan: Yeah, that's it. I really wish I knew why you didn't see all the classes, but I'll add it. [15:28:32] petan: It's added, a puppet run should apply everything. FYI, switching from gluster to labs generally requires a reboot after the fact too. [15:29:08] ok [15:29:12] jstart is basically a link to jsub that implies '-once', '-continuous' [15:29:21] andrewbogott it's here! 
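[Editor's sketch of the grid restart discussed above. Per the channel, `jstart` is essentially a wrapper around `jsub` that implies `-once` and `-continuous`, and jobs are inspected/stopped with `qstat`/`qdel`. The helper below only builds the equivalent argv list; the job name and script (`morebots`, `bot.py`) are hypothetical placeholders, not the bot's real invocation.]

```python
def jstart_as_jsub(job_args):
    """Return the jsub argv equivalent to 'jstart <job_args...>'.

    The channel states jstart is a link to jsub implying '-once' and
    '-continuous', so the rewrite is a straight prefix.
    """
    return ['jsub', '-once', '-continuous'] + list(job_args)

# Hypothetical job name and script, for illustration only:
cmd = jstart_as_jsub(['-N', 'morebots', 'python', 'bot.py'])

# A restart on the grid would then be roughly (shell, not executed here):
#   qstat                  # inspect running jobs
#   qdel morebots          # stop the old job
#   jstart -N morebots ... # resubmit; equivalent to the jsub argv above
```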
:P [15:29:25] petan, anything in the logs indicating why it croaked? [15:29:37] !log tools Deleted no-longer-needed tools-exec-cg node (spun off to its own project) [15:29:39] Logged the message, Master [15:29:41] I will check, but I believe it was on splitted server which died [15:30:10] Hm, the bot in operations survived the netsplit. But maybe it had a 50/50 chance [15:30:15] I don't see anything there in .out :/ [15:30:24] only some debug logs but no... [15:30:30] petan: I'd expect errors would be in .err :-) [15:30:33] DEBUG:root:'labs-logbot' got '!log tools petrb: on -dev'; Attempting to log. [15:30:34] DEBUG:root:'labs-logbot' starting [15:30:37] .err is empty file [15:30:47] Huh [15:30:56] That bot isn't very verbose is it. :-) [15:31:02] likely not [15:31:32] oh wait [15:31:36] I am lying [15:31:46] .err is that file with debug info [15:31:48] .out is empty [15:34:55] Warning: There is 1 user waiting for shell: Florianschmidtwelzow (waiting 94 minutes) [15:51:08] hi andrewbogott [15:51:18] 'morning! [15:51:19] the patches work fine now and are ready [15:51:44] did you see my note about the $wgServer patch? [15:52:50] Yeah, but we don't actually want to make that change. "//" . $_SERVER["SERVER_NAME"] is more correct even if it overrides something set elsewhere in the config. [15:53:19] hmmmm [15:53:58] Hardcoding the value is more fragile. It breaks https and also potentially use of instance-proxy [15:54:50] * aude checks if overriding is a problem for us [15:57:22] ok, looks fine [15:57:25] they are the same [16:00:08] andrewbogott: now to figure out why we get out of memory errors on our test client [16:00:26] anyone have that issue on labs? [16:00:30] Is oom the cause of the timeout? Or is that something else? [16:00:46] no idea but simply when editing [16:01:06] oh, it work snow [16:01:09] works now [16:02:22] strange [16:07:53] aude, does that mean things are working? 
[16:08:02] yes [16:08:11] * aude assumes puppet and labs was having a bad day [16:31:32] anyone knows what the table "hashs" are for? [16:32:40] AzaToth: That'd depend what DB you're talking about, I expect. :-) [16:32:53] enwiki_p [16:33:38] AzaToth: IIRC, that's an obsolete part of what is now rev_sha1 [16:33:45] ok [16:34:25] on an related note, I assume the maths table is related to math formular in wikitext? [16:35:43] AzaToth: That seems like a reasonable assumption. :-) [16:39:55] Hi [16:45:11] Coren: going slowly forward... http://paste.debian.net/9053/ [16:47:20] You'd gain several ms if that second query was using revision_userindex btw [16:47:50] oh [16:49:33] never heard about [16:50:00] can't find it in http://www.mediawiki.org/wiki/Manual:Revision_table nor in http://www.mediawiki.org/wiki/Manual:Database_layout [16:50:39] or perhaps indexes isn't documented [16:51:12] oh, it's a table [16:51:44] Coren: any documentation regarding that table anywhere? [16:52:27] * AzaToth scratches head [16:52:36] look like a standard revision by desc [16:53:24] some explination would be nice to have [16:53:46] ±spelling [17:32:23] Warning: There is 1 user waiting for shell: Bgwhite (waiting 0 minutes) [17:36:01] Coren: ? [17:36:29] AzaToth: Sorry, was AFK for lunch. [17:36:52] AzaToth: No, it's not documented yet; it's an alternative view that elides the supressed rows in exchange for the user index to be working. [17:37:50] oh [17:37:53] (Strictly speaking, both are views; _userindex is just differently organized to let some indices through that would otherwise be made unavailable because of conditional nulls) [17:38:14] any reason why not only have that view? [17:38:46] AzaToth: Because that view makes it impossible to see revisions where usernames have been supressed but not the e/s or the edit itself. 
[17:39:09] AzaToth: Which isn't an issue where you have a where clause against username or userid (since those rows wouldn't have been returned anyways) [17:39:12] * AzaToth is confused [17:39:44] AzaToth: revision has all the rows, with some fields conditionally blanked if the edit was revision deleted or supressed. [17:40:26] but how would that prevent user index from working? [17:40:27] AzaToth: revision_userindex never has conditional blanking of user or user_text (which lets the index work) but does not return the rows where those columns would have been supressed at all (rather than return them with the fields set to null) [17:41:11] AzaToth: Because a column can't use the underlying index if it's created by an expression. if(column-is-blanked, null, column) as column [17:42:24] oh, didn't grasp what you meant by conditional blanking [17:42:37] you mean that a view is blanking them dynamically? [17:43:37] how will rev_parent_id reflect the blanking? [17:45:52] Warning: There is 1 user waiting for shell: Bgwhite (waiting 13 minutes) [17:53:37] Coren: tested using revision_userindex, and it generally took 4ms longer to execute than just using revision... [17:53:59] o_O. For just the second query? [17:54:04] yup [17:54:16] D, [2013-06-07T17:53:50.324454 #4233] DEBUG -- : Revision Load (64.7ms) SELECT `revision`.* FROM `revision` WHERE `revision`.`rev_user` = 128326 ORDER BY `revision`.`rev_id` DESC LIMIT 1 [17:54:42] ... that still uses revision. :-) [17:54:46] D, [2013-06-07T17:54:39.322176 #4754] DEBUG -- : Revision Load (68.4ms) SELECT `revision_userindex`.* FROM `revision_userindex` WHERE `revision_userindex`.`rev_user` = 128326 ORDER BY `revision_userindex`.`rev_id` DESC LIMIT 1 [17:54:52] Ah. [17:55:20] goes a bit up and down, but generally it's like above [17:55:27] Hm. I'm going to venture to guess that if the user you are getting the last revision for has edited recently, the overhead for the table scan is very light with limit 1. 
[17:55:49] Whereas if the user has not edited in a long while, the difference will be very large. [17:55:58] (and in _userindex's favor) [17:56:14] Using an index has a cost, but it tends to have a /much/ better worst case performance. [17:56:46] any good username to think of? [17:57:31] User:C [17:58:00] D, [2013-06-07T17:57:56.151461 #4754] DEBUG -- : Revision Load (27.6ms) SELECT `revision_userindex`.* FROM `revision_userindex` WHERE `revision_userindex`.`rev_user` = 4538 ORDER BY `revision_userindex`.`rev_id` DESC LIMIT 1 [17:58:32] ok, a big diff [17:58:43] for revision, it's still searching [17:58:48] only two edits, in 2006 [17:59:22] Warning: There is 1 user waiting for shell: Bgwhite (waiting 27 minutes) [17:59:32] AzaToth: Right. Bad worst-case behavior. (You probably want to kill that query, it'll take hours to complete I'm guessing) [17:59:51] heh ツ [18:00:00] AzaToth: The basic rule should be, "If you have a where clause on user or user_text", use revision_userindex. [18:00:07] Coren: I see [18:00:15] how was it about rev_parent_id? [18:00:28] does it point to void if revision is not there? [18:01:39] rev_parent_id is never elided conditionally. [18:03:07] so if a row is elided, then rev_parent_id for a child revision will point to something that doesn't exists in revision_userindex? [18:03:35] petan: around? join #wikipedia-bag please :) [18:04:16] AzaToth: Correct, although you can always get that revision from the revision table itself. [18:04:43] (In fact, if you're fetching a revision by id, there is no benifit to using revision_userindex in the first place) [18:12:55] Warning: There is 1 user waiting for shell: Bgwhite (waiting 40 minutes) [18:26:25] Warning: There are 2 users waiting for shell, displaying last 2: Bgwhite (waiting 54 minutes) Rcpenalosa (waiting 2 minutes) [19:36:03] I have a question about restoring data from the backups in /public/datasets. If I run all the sql files into my msql db I don't have enough tables. what gives? 
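[Editor's sketch of Coren's rule above: queries with a WHERE clause on `rev_user` or `rev_user_text` should go through the `revision_userindex` view (where the user index actually works), and everything else can use `revision` directly. The helper only builds SQL text mirroring the timed query pasted in the log; it does not touch a database.]

```python
USER_COLUMNS = {'rev_user', 'rev_user_text'}

def pick_revision_view(where_columns):
    """Choose revision_userindex when filtering on a user column."""
    if USER_COLUMNS & set(where_columns):
        return 'revision_userindex'
    return 'revision'

def last_edit_query(user_id):
    """Build the 'last edit by user' query from the log, on the right view."""
    table = pick_revision_view(['rev_user'])
    return ("SELECT * FROM %s WHERE rev_user = %d "
            "ORDER BY rev_id DESC LIMIT 1" % (table, user_id))
```

Fetching a revision by `rev_id` alone would pick plain `revision`, matching the later remark that `revision_userindex` has no benefit when selecting by id.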
[19:41:25] manybubbles: not all the tables are replicated because they have private data [19:42:08] ledoktm: is there something I can do to restore a working wiki using those files? [19:43:19] not sure. [20:03:26] manybubbles: In general, creating the missing tables empty should give you a functional restore. [20:04:10] Coren: Thanks! That is what I was trying now. I just dumped testwiki, restored that to a new db, and am restoring one of the smaller dumps on top of that [20:07:05] <^demon> manybubbles: If you've already got a bare install, you can also import an xml dump via maintenance/importDump.php [20:08:14] ^demon: Is that better than the mysql? I saw that I could do that but those files were larger and I figured going bare metal would be faster. [20:08:21] so if I failed, I failed faster. [20:08:48] <^demon> Depends on the size of the dump I suppose :) [20:09:20] <^demon> There's also a tool called mwdumper that handles much larger dumps, but it's probably overkill for what you're doing right now. [20:09:49] <^demon> Coren: Having a nice package of mwdumper for labs projects working with dumps might be a nice thing to do. [20:10:31] Looks like I have to run update.php after the db restore method [20:10:32] ^demon: Indeed. [20:10:49] <^demon> manybubbles: You can't run update.php too often :) [20:12:44] Coren, scfc_de: http://tools.wmflabs.org/logs/ [20:13:36] hey guys. I want to run Parsoid on tools but It seems there is no way, Gabriel told Amir (the another Amir) that we can use Puppet but It must be installed on tool at first. Is it installed? [20:13:42] can i use that? [20:16:06] petan: ^ [20:16:10] hi [20:16:18] puppet is installed on labs :> [20:16:41] Amir1: what exactly do u need for it [20:16:45] I mean parsoid [20:17:05] for running Visual Editor [20:17:05] Coren: what do you think about the central logging, any feedback? 
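[Editor's sketch of the restore advice above: tables holding private data are not included in the public dumps, and per Coren, creating the missing ones empty generally yields a functional restore (followed by `maintenance/update.php`, as noted). The helper just computes which tables to create; the table names below are illustrative examples, not the authoritative list of withheld tables.]

```python
def missing_tables(expected, restored):
    """Tables present in the full schema but absent after the restore."""
    return sorted(set(expected) - set(restored))

# Illustrative table names only -- not the real private/public split:
expected = ['page', 'revision', 'text', 'user', 'watchlist']
restored = ['page', 'revision', 'text']

to_create_empty = missing_tables(expected, restored)
# Create each of these with its schema but no rows, then run
# maintenance/update.php as suggested in the channel.
```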
[20:17:30] see this:http://blog.wikimedia.org/2013/05/30/test-features-in-a-right-to-left-language-environment/ [20:18:33] petan I don't know how to run Parsoid via Puppet [20:20:53] Amir1 neither I do :/ [20:21:07] Amir1 what is actually needed to run parsoid? [20:21:22] node server.js [20:21:28] I must command this [20:21:53] one min [20:21:57] ok and what happen if you execute that command? [20:22:55] it's probably better to run parsoid centrally somewhere [20:23:00] http://www.mediawiki.org/wiki/Parsoid#Getting_started [20:23:13] in tools, or maybe the parsoid team can host one for you guys [20:23:18] in their project [20:23:51] packaging has not been a top priority so far, but I'd really like to create a proper deb after the July release [20:23:53] Ryan_Lane: I talked to Roan he said It must be the host [20:24:07] The code points out to the local host of the host [20:24:25] parsoid is a service [20:24:30] it should be able to live anywehre [20:24:32] *anywhere [20:24:38] It must be ran in the host [20:24:49] gwicke: ^^ ? [20:24:56] it can, but it won't necessarily be able to speak to random hosts [20:25:20] +100 in tools, or maybe the parsoid team can host one for you guys [20:25:25] it still needs a config for each wiki it is supposed to talk to [20:25:33] gwicke: can you guys just make a parsoid node that's accessible for the rest of labs? [20:25:38] in your project? [20:25:38] wikipedias are preconfigured, but random wiki x behind a firewall will never be [20:25:48] Ryan_Lane: we have that already [20:25:48] dedicate an instance for it [20:25:53] http://parsoid.wmflabs.org [20:26:12] Amir1: any reason you can't use that? [20:26:20] gwicke: what's the private ip of that node? [20:26:23] the public IP won't work [20:26:31] err. the private hostname [20:26:41] spof [20:26:47] parsoid-spof ? 
[20:26:53] Ryan_Lane: I didn't test that [20:26:54] yes, IIRC- let me check [20:27:05] parsoid-spof.pmtpa.wmflabs [20:27:07] yep [20:27:09] Amir1: use parsoid-spof.pmtpa.wmflabs [20:27:26] gwicke: that's intended for others to use as well, right? [20:27:35] for testing, yes [20:27:39] cool [20:28:08] it always runs pretty much the latest code, so things can break occasionally [20:28:13] * Ryan_Lane nods [20:28:20] better for it to be the newest code [20:28:25] Ryan_Lane: Where I use this, Do I set up it in localsettings of parsoid [20:28:39] I'm not actually sure how the config works [20:28:46] gwicke: ^^ ? [20:30:07] Amir1: if you only want to test wikipedias, then those are already configured [20:30:23] gwicke: no [20:30:38] http://parsoid.wmflabs.org/fr/Foo [20:30:38] I want to set up visual editor in my wiki [20:30:48] http://tools.wmflabs.org/wikitest-rtl/w/index.php [20:30:56] ah, that requires a custom setup [20:31:15] we could add a config line for your wiki, but that clearly does not scale [20:31:54] Amir1: is this for testing, or actual production use? [20:31:56] gwicke: so how we set up this? [20:32:16] gwicke: It's for testing of RTL lang's folks [20:32:36] do you have a public API URL for your wiki? [20:32:52] yes [20:32:55] like other wikis [20:33:10] http://tools.wmflabs.org/wikitest-rtl/w/api.php [20:33:16] If that's waht you mean [20:33:20] *what [20:41:35] gwicke: is it possible? [21:00:45] gwicke: are you still there? [23:37:33] Coren, ? [23:37:49] Cyberpower678: ! [23:38:01] Coren, what's the status of S7? [23:38:16] petan: It'd be even more useful if you also used it to collect syslogs from all the other instances. [23:38:58] Cyberpower678: Beating my head repeatedly against the wall regarding the abomination that centralauth's perpetrates upon the database. [23:40:02] * Cyberpower678 gives Coren a pillow to beat his head on. 
[23:40:59] * Coren mumbles obscenities about having to try to efficiently conditionally redact columns based of a [bleep] [bleep] string comparison on an unindexed column. [23:48:49] Coren: Which table is it? [23:49:03] globaluser [23:52:57] Just say no [23:58:40] Coren, what's the status of researcher? [23:59:02] Cyberpower678: No update. As I've said earlier, that'll take a while.