[00:25:36] LuaKT: you should `ssh -vvv bastion.wmflabs.org echo foo` and pastebin the output [00:25:49] also, what's in your .ssh/config ? [00:28:08] http://pastebin.com/NVtXQKtK here's the output from running that in cygwin [00:28:15] I don't have a .ssh/config file [00:36:10] (PS1) MarkTraceur: Move config into a default file and WMF files [labs/tools/grrrit] - https://gerrit.wikimedia.org/r/98141 [00:36:35] Notably, I'll be fucking with grrrit-wm a little bit in the next hour or so [00:36:54] But not to worry, it's all for the greater good [00:38:52] marktraceur: feel free to self merge and stuff :) but only after testing! (in production!) [00:39:07] i mean, it's okay if the production instance fucks up, but git history! [00:40:50] Yup [00:41:58] LuaKT: 10:1 the problem is case significance in your shell account name. (Hint: it's case significant, and probably all lowercase) [00:43:16] * YuviPanda waves at Coren [00:43:28] not going to make you do anything :D Just a status update [00:43:47] Coren: Yeah I checked that earlier, it's luakt lower case, but I get the "Permission Denied" before being asked for a username when using jeremyb's command above [00:44:54] ... ssh doesn't ask for a username; it uses the username you have on your local box unless you specify one. [00:45:13] i.e.: ssh luakt@tools-login.wmflabs.org [00:45:42] I rewrote portgranter to work with redis and also use identd for auth [00:45:42] so that only things calling with a tool's service account can submit routes for things under that tool's name [00:45:42] now documenting it, will submit a patch in a bit [00:45:42] Coren: I hope to get the entire thing done (and tested!) before Monday, so hopefully you can set aside a day next week to do the migration :) [00:45:59] Still permission denied [00:46:34] http://pastebin.com/4nGJRmcw [00:49:21] LuaKT: Your key isn't valid. How did you generate it? [00:49:38] In fact, the file that should contain your ssh key doesn't contain an ssh key at all.
[00:50:03] It looks like the beginning of... a PEM certificate of some sort? [00:50:12] Or a PGP key. [00:51:10] Coren: ssh-keygen -t rsa [00:51:33] -----BEGIN RSA PRIVATE KEY----- first line of id_rsa [00:52:16] Yes, but it's not recognized as such, so I presumed it was something else that started with ----BEGIN :-) [00:52:35] debug3: Not a RSA1 key file /root/.ssh/id_rsa. [00:52:35] debug2: key_type_from_name: unknown key type '-----BEGIN' [00:53:09] Pro tip: don't ever use root to ssh somewhere else. For that matter, don't use root. :-) [00:53:32] Yeah it's a vm I was messing with [00:54:15] Try generating (and using) a DSA key; perhaps you have an odd bug. It should work, but at least this will help isolate the problem. [00:55:07] Coren: see http://toodadoo.wmflabs.org/lolrrit-wm/r/ - it is proxying dynamically to a simple python test server I'm running (temporarily!) on tools-login [00:55:39] YuviPanda: Not gonna do this tonight, buddy. :-) [00:55:44] Coren: oh no [00:55:48] Coren: I'm not making you do it :D [00:55:50] Coren: just telling you [00:56:04] Coren: BECAUSE IT HAS BEEN A GHOST TOWN AND I HAVE NOT FOUND ANYBODY TO TELL IT TO! [00:56:06] :P [00:56:19] i'm just excited. nevermind the caps. [00:57:22] Coren: http://pastebin.com/6k6sWate Thanks for your time by the way [00:59:42] LuaKT: Well at least your client now recognizes your key, but you haven't added it to wikitech. :-) [01:00:10] Coren: I have added it, I presume it takes some time to update [01:00:48] It does, but it's normally fairly short. Let's wait a couple of minutes. [01:01:37] Coren: I'm off to sleep. Hopefully we can get this migrated next week :) [01:02:32] Still not seeing it. You see it if you look on your preference page on wikitech? Because right now I'm only seeing an RSA key. [01:03:57] Coren: I have 6 keys on there, you're only seeing 1 [01:04:00] ?* [01:04:20] Yep. [01:04:40] Possibly, the second one is broken in such a way that the copying mechanism stops processing them.
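For reference, the key checks above can be reproduced against a throwaway key: a key generated with `ssh-keygen -t rsa` does start with a PEM-style `-----BEGIN` header, and the "unknown key type '-----BEGIN'" / "Not a RSA1 key file" messages are just noise from the failed RSA1 probe, not a verdict on the key. A minimal sketch (temporary path, no passphrase; the exact BEGIN header varies by OpenSSH version):

```shell
# Generate a disposable RSA key and confirm the client recognizes it.
# head -1 shows the PEM-style header (RSA PRIVATE KEY or OPENSSH PRIVATE KEY,
# depending on version); -lf prints bits + fingerprint only for a valid key.
tmp=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N "" -f "$tmp/id_rsa"
head -1 "$tmp/id_rsa"
ssh-keygen -lf "$tmp/id_rsa.pub"
```
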
Odd. [01:04:48] Try to replace them all with the new DSA key? [01:05:54] Coren: Ok done [01:06:18] Not seeing a change yet; let's wait a few minutes. [01:07:02] What's your Wikitech username? [01:07:43] LuaKT [01:09:13] What I find "Interesting" is that, as far as I can tell, your keys have not changed since Nov 25 [01:11:24] Coren: Thats the first key I added before getting approved. Going to try this 'Restore all default settings' [01:11:44] Oh, didn't reset keys [01:11:47] I don't think that affects keys. [01:11:52] Yeah, that. [01:12:28] andrewbogott_afk: Ping for when you're around; it looks like wikitech is no longer dumping key changes to /public -- or at least not for LuaKT [01:12:53] Coren: Thanks [01:12:56] I'm going to look into what's going on there in the meantime. That really should be automated. [01:16:32] Actually, I now see that it's been ~2 days since any key has been touched; that seems unlikely. [01:20:37] Coren: fyi, almost certain the validity of RSA key thing is red herring [01:21:10] jeremyb: Regardless of the kind of fish, the issue with the keys not updating is certainly a problem. :-) [01:21:21] yes, certainly [01:26:17] LuaKT: key="$HOME/.ssh/id_rsa"; ssh-keygen -lf "$key.pub"; ssh-keygen -yf "$key" > "$key.newpub"; ssh-keygen -lf "$key.newpub" [01:27:36] re red herring: it's just misleading log msgs because it first tries to load it as RSA1 and when that fails tells you a novel about why and then tries RSA2. [01:28:05] but something's also wrong because of """debug2: key: /root/.ssh/id_rsa ((nil))""" [01:31:52] jeremyb: I generated a new RSA key then ran that command http://pastebin.com/arQmdqgX [01:32:04] jeremyb: debug2: key: /root/.ssh/id_rsa (0x7fd938b2f4e0) instead of nil [01:35:50] LuaKT: so we lost the old key i guess [01:35:59] LuaKT: new one starts with approximately the same first line? 
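jeremyb's one-liner above can be exercised on a disposable key: when the private key file is intact, the fingerprint of the stored `.pub` and of the public half re-derived with `-y` must match. A sketch using a passphrase-less temp key (so `-y` needs no prompt), rather than the real `~/.ssh/id_rsa`:

```shell
key=$(mktemp -d)/id_rsa
ssh-keygen -q -t rsa -N "" -f "$key"
ssh-keygen -lf "$key.pub"              # fingerprint of the stored public key
ssh-keygen -yf "$key" > "$key.newpub"  # public half re-derived from the private key
ssh-keygen -lf "$key.newpub"           # same fingerprint if nothing is corrupted
```

If the two fingerprints differ, the private key and the stored public key no longer belong together.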
[01:37:53] jeremyb: both the old and the new start with -----BEGIN RSA PRIVATE KEY----- then Proc-Type: 4,ENCRYPTED on second line [01:38:02] I'll run that command on the old one [01:39:14] LuaKT: if you can move old one back into place then maybe we can get you logged in before the other problem is fixed :-] [01:40:06] Coren: if you want mysql pointers (re replication) i have a few queries you may want to run. but leaving for sean is fine too [01:40:21] jeremyb: Ah, the "old" one isn't the one I added on the 25th that Coren can see, Sorry for the confusion [01:40:39] jeremyb: The one from the 25th I don't have anymore [01:40:58] c9:78:33:b2:bd:33:a1:c6:be:d0:f2:13:ce:7b:46:da [01:41:08] that's the one i see in the keys dir [01:42:14] jeremyb: I see why replication has stopped, but the problem is that the replication system is fairly new and not documented so without Sean's intervention it'd be rough to restart it. [01:43:05] jeremyb: Yeah that doesn't match the "old" one [01:43:10] Coren: yeah, restarting is a different question. i just meant to get "what is the latest error msg" [01:43:31] LuaKT: and in ldap: [01:43:31] 1024 2f:7c:a8:b2:fa:d5:3e:27:cb:3b:ba:a2:48:80:c1:1b root@luakt (DSA) [01:43:31] 2048 02:08:65:cc:7f:6e:47:d4:73:32:09:05:4e:c5:de:c3 root@luakt (RSA) [01:43:48] so that's 3 different keys [01:44:21] * jeremyb waits to see what coren cooks up with key syncing [01:44:31] jeremyb: Binlog replay failing because it's trying to delete a row that's not there in externallinks [01:44:44] jeremyb: yep I have those 2 keys [01:45:25] Coren: lovely. sean was adding a primary key to that table. idk what the progress is (or whether s5 is done yet or not) [01:45:49] Coren: (that primary key will help with syncing it up if the table gets somehow out of sync with master) [01:46:11] in fact it's about time for me to inquire about the status of that change again :) [01:46:14] jeremyb: It will. 
I'm thinking that may be the problem; el_id might be in prod but not in the sanitarium yet. [01:46:26] hrmmmm, interesting [01:46:49] So syncing of externallinks failed; thus replaying binlogs against it goes boom. [01:47:14] Fixing this without hosing the rest of the shard is beyond my confidence level. [01:48:02] yeah, some future update in the log could rely on externallinks being accurate [01:48:07] during replay [01:48:21] so you can't just skip it [04:07:09] Coren: Can you help with my lighttpd config? [05:30:21] * a930913 pets Ryan_Lane. [05:44:33] a930913: tried /lastlog but i can't quite find what you're looking for. what exactly are you trying to do? how do you want lighttpd to behave? [05:44:46] you should state that before you start poking people :) [06:19:32] good morning [06:19:43] the replication of Wikidata is still broken ... day 3 [08:24:06] I need help [08:24:06] Hi Ahmad_Sammour, just ask! There is no need to ask if you can ask [08:24:53] i want to put a limit on the number of interwikis in SQL code. [08:25:52] e.g. x article has 5 or more interwikis [08:30:46] Ahmad_Sammour: Use the limit clause [08:31:01] e.g. SELECT something FROM table WHERE condition=1 LIMIT 5; [08:31:41] limit of interwikis ? [08:32:01] * SigmaWP shrugs, I don't know what your query is [08:32:50] "limit" will make it so that you will receive at most 5 results from the query [08:33:29] ('SELECT CONCAT(":",page_title,"]]||", count(*)) FROM langlinks JOIN categorylinks ON ll_from = cl_from AND cl_to="'+PageTitle+'"JOIN page on ll_from = page_id WHERE page_namespace = 0 AND page_is_redirect = 0 AND NOT EXISTS (SELECT * FROM langlinks as t WHERE t.ll_lang="ar" and t.ll_from = langlinks.ll_from) GROUP BY ll_from ORDER BY count(*) DESC,page_title;') [08:33:47] this is the query. [08:34:29] I'm not good enough to know what that means [08:34:29] Sorry :( [08:35:05] It's OK. [12:01:04] jeremyb: Trying to add an access control header to the page.
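A side note on the SQL exchange above: `LIMIT` only caps the number of returned rows, whereas "article has 5 or more interwikis" is a `GROUP BY ... HAVING` filter. A toy sketch of the pattern, with sqlite3 standing in for the labs MariaDB and a simplified `langlinks` shape:

```shell
sqlite3 <<'SQL'
CREATE TABLE langlinks (ll_from INTEGER, ll_lang TEXT);
INSERT INTO langlinks VALUES
  (1,'de'),(1,'fr'),(1,'es'),(1,'it'),(1,'nl'),  -- page 1: five interwikis
  (2,'de'),(2,'fr');                             -- page 2: only two
-- pages with at least 5 interwiki links; LIMIT merely caps the row count
SELECT ll_from, COUNT(*) AS n
FROM langlinks
GROUP BY ll_from
HAVING COUNT(*) >= 5
ORDER BY n DESC
LIMIT 10;
SQL
```
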
[14:10:40] Coren: dewiki_p.externallinks still seems to be missing some million entries (used to be around 10 million) [14:13:29] and rc is still 2 days behind and not increasing, if i see correctly. [14:36:21] giftpflanze: Slave status shows that the replica is caught up to the sanitarium. [14:38:06] Coren: pardon? [14:39:34] giftpflanze: Mostly thinking aloud. I'm trying to see what may be up. [14:44:16] Coren, what special page on wikitech will allow me to create a tools project? [14:44:47] Cyberpower678: That's through the 'manage project' interface. [14:44:57] There is an 'add service group' link there. [14:45:25] Coren, can you provide a link? You guys seem to be moving it around. :/ [14:45:42] Cyberpower678: It has never moved since Labs existed. [14:46:32] It used to be located in Special:NovaProject. [14:46:41] Now it's not. [14:47:18] Coren: got any time for some ssh key debugging on labs ? :D [14:47:55] Ah, yes, the /destination/ did change. https://wikitech.wikimedia.org/wiki/Special:NovaServiceGroup [14:48:23] hashar: kinda-sorta. What's up? [14:48:58] the context is setting up Jenkins slaves in labs hosted on the same machine as the jenkins master [14:49:22] to do that, I created a jenkins-slave user in the 'integration' labs project. Generated ssh key pair in that user home directory and added the public key in wikitech [14:49:25] Coren, I seem to be the goto person for tool takeovers. :p [14:49:36] ldaplist -l passwd jenkins-slave : confirms the public key is there [14:50:00] but, when I connect to localhost ( sudo su - jenkins-slave … then … ssh localhost I get rejected) [14:50:39] Coren, how long does it take for the service group to be ready for use? [14:52:06] Coren: long story short, I suspect PAM prevents the authentication :/ /etc/security/access.conf is non obvious though [14:52:57] Hm. Is the key actually there? [14:53:47] It's not. [14:54:02] That looks suspiciously like yesterday's bug with a user.
[14:54:20] It looks like whatever moves keys from LDAP to /public/keys is no longer working. [14:54:40] * hashar looks there [14:55:01] ahh [14:55:10] /public/keys/jenkins-slave does not exist :] [14:55:56] I can't find any documentation on how that process works, nor where it takes place. [14:56:25] will look at Openstackmanager extension [14:57:00] I don't think that's written to from the extension; I'm pretty sure there's a cron job somewhere to do that. [14:57:12] The extension, IIRC, just puts stuff in LDAP. [14:58:59] Coren, I'm no linux expert. How do I create a folder redirect? [14:59:39] In apache, you mean? [15:00:51] I'm creating replacement tools involving the use of the cgi-bin, but it was located in the public_html folder. Labs' cgi-bin is located outside of the public_html. [15:02:03] ... just move the contents to the actual cgi-bin. No redirection needed. [15:02:33] Fine. :| [15:02:34] Cyberpower678, the ~/cgi-bin folder is handled correctly by the web server, see https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help#Web_services [15:03:47] ireas: It's not handled "incorrectly", it's handled as a CGI directory (which is the intent). If you want things that aren't CGI, then one shouldn't put them in cgi-bin. :_0 [15:04:29] Coren, that’s what I wanted to say ^^ the web server interprets ~/cgi-bin correctly as the cgi-bin directory and "mounts" it to //cgi-bin [15:04:51] Ah. Well, that's clearly correct. :-) [15:05:02] ;) [15:14:31] Coren: I tried out by putting the key somewhere under /etc/ssh/userkeys/ , not much luck [15:14:32] :( [15:14:45] * hashar feels unproductive [15:22:47] hashar: Yeah, I've been trying to find documentation on how the keys move from LDAP to the shared directory with no luck. Perhaps we'll be lucky and either andrewbogott_afk or Ryan_lane will stop by during their shopping. 
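Coren's "Is the key actually there?" check above boils down to looking for the key's base64 blob in whatever file the LDAP export writes (here, `/public/keys/jenkins-slave`). A hedged sketch with temp files standing in for the real export path:

```shell
# Stand-ins: $d/exported_keys plays the role of /public/keys/jenkins-slave.
d=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$d/id_rsa"
cp "$d/id_rsa.pub" "$d/exported_keys"      # pretend the LDAP->file sync ran
blob=$(awk '{print $2}' "$d/id_rsa.pub")   # base64 body, ignoring type/comment
if grep -qF "$blob" "$d/exported_keys"; then
  echo "key present in export"
else
  echo "key missing - sync from LDAP has not run"
fi
```

Matching on the base64 field rather than the whole line avoids false negatives when only the trailing comment differs between copies of a key.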
!newweb [15:26:16] https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help/NewWeb [15:26:20] Cyberpower678: ^^ [15:27:22] Coren: will file a bug and call it an end on my side :D [15:27:47] hashar: That also works. :-) [15:30:18] ahh [15:30:21] uid mismatch probably [15:30:49] jenkins-slave LDAP uid is 4098 [15:30:53] but id gives me 1001 [15:53:01] well opened https://bugzilla.wikimedia.org/show_bug.cgi?id=57751 [16:16:21] addshore, hihi [16:16:28] hihi [16:17:52] * YuviPanda waves at addshore [16:17:55] addshore, I'm working on making scottywong's tools available on labs. [16:18:02] I hit a hurdle. [16:18:09] which tools is that? :P [16:18:24] * addshore waves at YuviPanda :) (care to give me a minor puppet review? :)) [16:18:30] sure [16:18:32] link? [16:18:52] YuviPanda: https://gerrit.wikimedia.org/r/#/c/96552/ [16:19:12] dont need to +2 it or anything, just a sanity check as this is the first puppety thing i am writing :) [16:19:25] I can't +2, remember? :P [16:19:25] reading [16:19:35] hence I want to know if there is anything I could do better or anything I have missed totally or might want to think about :) [16:19:41] empty init? [16:19:56] addshore, You've got a pm. [16:20:06] addshore: omg the entire thing uses tabs! [16:20:08] NOOOOOO! [16:20:09] :P [16:20:17] 4spaces plz [16:21:49] ahh yes, aude already told me that :P [16:21:55] Need to fix in a followup ;p [16:25:28] addshore: simple comments [16:25:41] [= [16:25:46] why 4 spaces? :P [16:25:48] why not 2? :P [16:26:04] addshore: consistency with python [16:26:04] :P [16:27:53] pah [16:27:55] thats silly :P [16:28:53] addshore: not sillier than tabs [16:29:09] a n d e x c e s s i v e s p a c i n g e v e r y w h e r e [16:30:29] :d [16:33:57] !log deployment-prep Added a ssh public key for jenkins-deploy user. Should only be accepted from 10.4.0.58 which is deployment-bastion.pmtpa.wmflabs [16:34:49] addshore: all new puppet code is 4spaces, btw.
only ancient code has tabs [16:35:18] addshore: and I suggest putting a license header / copyright header on all files, mostly because, uh, puppet repo has... no licensing information :D [16:39:01] pash [16:39:06] ammending now ;p [16:42:09] taaaaaaaaaaaabs! [16:43:16] YuviPanda: question :P [16:43:23] addshore: sure? [16:43:33] * aude says no :) [16:43:38] require => [ Package['npm'], File['/home/wdbuilder']. Git::Clone[''clone_wikidatabuilder''] ], ? [16:43:47] yeah that works [16:43:48] does that work? :P [16:43:49] [= [16:43:57] although I think it is Git::clone [16:44:00] rather thatn Git::Clone [16:44:04] okay [= [16:44:08] that... has always been a little confusing to me :D [16:44:29] addshore: it is Git::Clolne [16:44:31] Clone [16:44:32] with a capital C [16:44:36] not small [16:44:51] addshore: put them in separate lines tho :) [16:44:59] heh, okay :P [16:46:04] YuviPanda: my ide complains about the lowercase c in clone :P [16:46:13] what IDE? [16:46:20] PHPStorm's puppet support? [16:46:23] phpstorm plugin :/ [16:46:25] xD [16:46:55] :C [16:46:56] err [16:46:56] :D [16:47:00] /me is sticking to Emacs for now [16:47:03] :P [16:48:20] /me considers sleeping early today [16:48:35] /me gives YuviPanda coffeee [16:48:36] YuviPanda: which editor are you using under emacs? [16:48:43] hashar: vim :D [16:48:48] +2 [16:49:15] hashar: I've Emacs with Evil mode, so I can get access to Emacs modes and stuff but don't have to suffer its stupid keybindings :D [16:49:35] hashar: also modal editing FTW! Emacs can pry my text objects from my cold... dead... hands [16:51:16] YuviPanda: ty, addressed your comments [= [16:51:20] addshore: :D [16:51:46] hashar: can git::userconfig be used to push to repos? 
;p [16:51:52] I see your name in the example ;p [16:52:12] just a wrapper to generate a ~/.gitconfig file [16:52:29] get a puppet hash of section name => key => value [16:52:31] and expand it as: [16:52:35] ahh okay :) Still cleaner than what I am using ;p [16:52:39] [section name] [16:52:39] key = value [16:53:02] addshore, MariaDB [enwiki_p]> SELECT * FROM namespaces; [16:53:02] ERROR 1356 (HY000): View 'enwiki_p.namespaces' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them [16:53:24] YuviPanda: does your new labs proxy support redirecting sub directories to different instances ? [16:53:51] hashar: you can either have subdirectory redirect or domain redirect. not both... yet. [16:53:53] YuviPanda: I could use domain.labs to be put routed to an instance but domain.labs/directory/ to go to another ip:port [16:54:13] hashar: that feature isn't available on the proxy deployed from wikitech yet :( [16:54:27] hashar: but I built it for toollabs last week, so I can add it to the general proxy in a week or two [16:54:32] well I could do a subdirectory redirect of / :-D [16:54:43] YuviPanda, MariaDB [enwiki_p]> SELECT * FROM namespaces; [16:54:43] ERROR 1356 (HY000): View 'enwiki_p.namespaces' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them [16:54:45] you can have otherdomain.wmflabs.org :P [16:54:49] go to a different one [16:54:54] but not ideal [16:55:25] hashar: but this feature (URL based routing) should turn up sometime soon. On my radar for suer [16:55:26] *sure [16:55:31] nicee :-] [16:55:42] take your time. [16:55:54] hashar: my current labs focus is to move toollabs to the dynamic proxy :) [16:56:10] so we can finally get rid of goddamn CGI (or even fastcgi) [16:56:20] Helloooo? 
[16:56:22] and have websocket access, and also write webapps in whatever language we want [16:56:31] * YuviPanda flails around like an excited kid [16:56:36] and that feature will most probably let us get rid of most public IP assigned to labs [16:56:45] hashar: yup! [16:57:00] hashar: I expect a lot of the public IPs to be taken back before the eqiad migration [16:57:13] will also make the migration easier [16:57:19] petan, ping [16:57:25] * addshore has already given 1 back ;p [16:57:43] nice :D [16:57:51] Cyberpower678: pong [16:57:59] petan, MariaDB [enwiki_p]> SELECT * FROM namespaces; [16:57:59] ERROR 1356 (HY000): View 'enwiki_p.namespaces' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them [16:58:03] Why is that? [16:58:09] I have no idea [16:58:13] * aude hoards my ips [16:58:24] well, actually I do know what does it mean, but I don't know why it happens [16:58:38] petan, what does it mean? [16:58:38] either the view is invalid (the schema changed) or permissions changed? [16:58:57] * YuviPanda passes IP control laws and takes away aude's IPs [16:59:08] it means that the definition of view references some columns or tables that either don't exist or you can't read them [16:59:21] * addshore sets up a man in the middle attack and steals the ips that were going from aude to YuviPanda :O [16:59:23] Well, can that be fixed? [16:59:25] :) [16:59:30] like create view blah as select * from bleh; [16:59:45] if you didn't have access to bleh you would see this when you did select * from blah; [17:00:05] Cyberpower678: yes it can be fixed [17:00:19] petan, can you fix it? [17:00:21] no [17:00:28] I have no powers [17:00:33] Because Coren|AFK is away. [17:00:42] you need Coren or anyone with dba permissions [17:01:15] Cyberpower678: When was the last time you did a successful query on this table? [17:01:33] Never. 
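petan's `create view blah as select * from bleh` explanation above is easy to reproduce: the view definition outlives its base table, and the breakage only surfaces on SELECT, which is roughly what ERROR 1356 reports. A sketch with sqlite3 as a stand-in (MariaDB's error wording and permission checks differ):

```shell
out=$(sqlite3 2>&1 <<'SQL'
CREATE TABLE bleh (x INTEGER);
CREATE VIEW blah AS SELECT x FROM bleh;
DROP TABLE bleh;        -- sqlite happily drops the view's base table
SELECT * FROM blah;     -- ...and the view only breaks when queried
SQL
) || true
echo "$out"
```
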
[17:02:00] Cyberpower678: Ok Then, Never saw it before either [17:03:37] Cyberpower678: Doesn't exist :P Ghost table. [17:04:05] hedonil, oh yes it does. [17:04:16] show tables; will tell me that [17:04:44] Cyberpower678: It's not a public view anywhere I've been so far [17:04:59] namespaces would be specific to labs [17:05:01] Cyberpower678: i think the namespaces table is labs specific, I dont see it in the mediawiki schema [17:05:04] e.g. toolserver had something like it [17:05:07] * aude thinks [17:05:09] Cyberpower678: file a bug or poke Coren|AFK ;p [17:05:16] has info about namespaces on all the wikis? [17:05:22] * aude clueless [17:05:58] aude may be correct. Toolserver does and I'm trying to access it as I migrate a tool from toolserver. [17:06:06] Cyberpower678: it doesnt appear to exist on all wiki dbs on labs [17:06:14] ie. i find it on enwiki but not on ptwiki or wikidata [17:06:15] sounds vaguely familiar [17:06:39] huh, it's only on enwiki? [17:06:47] Queried that info some time ago for all wikis, think it was in l10n_cache [17:06:48] well, its not on ptwiki :P [17:06:55] not on metawiki [17:07:04] or eswiki :P [17:07:40] https://bugzilla.wikimedia.org/show_bug.cgi?id=49167 [17:07:49] wontfix? [17:08:03] from magnus Thanks, I'll use the API instead. [17:08:16] use https://en.wikipedia.org/wiki/Special:ApiSandbox#action=query&meta=siteinfo&format=json&siprop=namespaces%7Cnamespacealiases ;p [17:08:58] * YuviPanda considers setting up a way to use ansible on labs [17:09:17] addshore cheers :) [17:09:24] YuviPanda: yay [17:09:27] i like ansible :D [17:09:37] though im starting to come around to puppet a bit :P [17:09:40] addshore: I haven't fully checked it out yet, but...
python :D [17:09:46] haha xD [17:09:52] addshore: what I really want to do is per-project easy-to-use puppet/ansible repositories [17:09:59] * aude likes puppet [17:10:00] addshore: so we don't put every goddamn thing into one repo :P [17:10:06] * YuviPanda likes puppet too [17:10:08] YuviPanda: see https://github.com/Orain/ansible-playbook [17:10:11] [= [17:10:22] addshore: indeed, have already checked it out :) [17:10:25] :D [17:10:26] * YuviPanda hangs out on #orain [17:11:09] * YuviPanda tries to not get distracted [17:11:13] proxy for toollabs first :D [17:13:02] Cyberpower678: If you just want some info try SELECT * FROM xxwiki_p.l10n_cache where lc_key='namespaceNames'; [17:13:56] Cyberpower678: If that's what you need. [17:15:18] Cyberpower678: blob in lc_value contains serialized array of namespaces [17:21:36] hedonil, python can't unserialize. [17:21:42] Sadly. [17:23:06] Cyberpower678: php can. [17:23:59] hedonil, I'm not in the mood to completely overhaul and rewrite Scottywong's tools [17:24:14] But yes, I like using PHP. I hate Python. [17:26:07] Cyberpower678: make a php script which takes a single param (serialized data) and returns it in some other format python can read? :P [17:26:12] then call it from the py script? xD [17:26:21] Cyberpower678 <3 [17:26:29] Cyberpower678: Well, It's pretty static. One time job - get it some lines of php-code, put it then on a tools-db table - finally tell us, so we can use it too :P [17:26:35] I thought I am the only person out here who doesn't like python [17:27:23] petan, no. I even went as far as to rewrite SnotBot to PHP from Python. [17:27:34] cool [17:28:12] SnotBot is surely very happy [17:28:12] Python has left me unimpressed, and the "oh so awesome" Pywikipedia framework is total crap. Peachy does so much better. [17:28:24] xD [17:28:41] Or addshore's framework.
[17:28:59] mwahahahaa [17:29:02] I don't really know what makes it a "framework" it looks to me like some bunch of complete bots that you can just launch yourself using your own bot account [17:29:11] * hedonil applauses to Cyberpower678's thoughts on python [17:29:22] :P [17:29:35] we seem to have a few people here that havnt really caught the python bug :P [17:29:37] me included :P [17:29:40] * YuviPanda uninstalls php from toollabs [17:29:54] * petan uninstalls Panda from Yuvi [17:30:04] To be honest, from what I experienced while I rewriting SnotBot, the new script is 25% shorter. [17:30:37] and 26084600% faster and 78% less memory expensive [17:30:52] and 99% easier to read [17:30:53] Pywikipedia has virtually 0 error handling. [17:31:05] python can handle errors? o.O [17:31:23] It doesn't re-attempt edits on a failed attempt. [17:31:45] It hit a 503 response once. [17:31:50] What happened? [17:31:55] Python output: [17:32:04] Server response: 503 [17:32:22] Uncaught exception on first half of program. Skipping. [17:32:32] it doesn't matter if exception description is useful in python. only what matters is if exception is cute [17:32:51] the indentation of produced error text matters. not the text. [17:33:29] it's funny how it can determine where the "half of program" is :D [17:33:45] I wondering how much cpu and memory that computation needs [17:34:14] Peachy handles the errors, re-attempts, and kills the bot after multiple failures, to prevent damage to the wiki. [17:34:41] It also doesn't explode when it receives a 503 response. [17:34:59] And it can simply do more [17:35:04] implementing a code that prevents damage to wiki could damage the look & feel of python code and thus make the source code less beautiful. you can't do this in python [17:35:31] petan, lol [17:35:59] * Cyberpower678 wonders how people like this programming language [17:37:16] petan, my philosophy, is better stable than beautiful. [17:37:27] Afterall, the end result is what counts. 
[17:37:31] that should be philosophy of every coder [17:37:33] :P [17:37:59] Well apparently, python programmers have that issue. [17:37:59] but who cares about cpu or memory these days, every computer has a lot of that [17:38:10] I do. [17:39:18] when I installed ubuntu on my mini-laptop (2gb of ram) just gnome 3 (written in java I think) with lot of python stuff all over (installed by default in ubuntu) were using more than 1.5gb of ram, I couldn't even launch internet browser [17:39:33] then I replaced it with lxde (written in c) and ram usage dropped from 1.5g to 82mb [17:39:33] I may have an i7, 8 core at 3.7 GHz, 16GB RAM, and 800GB SSD, but the less RAM and processor I use for one thing, the more I have for something else [17:41:30] aha I take it back, gnome 3 is partially written in c as well, but it's rather a huge interpreter of js [19:00:59] why is select rev_page from revision where rev_user_text='Liangent-bot' limit 1; so slow? [19:01:12] or is Special:Contributions using another query? [19:01:57] you know https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help#Tables_for_revision_or_logging_queries_involving_user_names_and_IDs ? [19:03:41] giftpflanze: nice thanks. I knew this and wrote it in my db wrapper (to always use revision_userindex), then forgot it. then now I'm writing a raw sql query [19:37:24] Change on 12mediawiki a page OAuth was modified, changed by Base link https://www.mediawiki.org/w/index.php?diff=832640 edit summary: hey i have spent 30minutes for tagging it, also this info should be represented normal way [19:40:16] Change on 12mediawiki a page OAuth was created, changed by Base link https://www.mediawiki.org/w/index.php?title=OAuth edit summary: from [[Special:PermanentLink/830908]] [19:40:36] Change on 12mediawiki a page OAuth was modified, changed by Base link https://www.mediawiki.org/w/index.php?diff=832736 edit summary: [20:02:14] is anyone working on the dewiki/wikidatawiki problems at tool labs? 
[20:05:58] apper: coren and possibly springle can deal with it [20:06:05] the data is out of sync [20:06:44] and they thought several times that they fixed it ... when they do, it will probably take some time before the synchronisation is ok [20:07:21] they are both afk [20:07:27] okay, thanks [20:07:34] the database didn't change for days now [20:07:39] do we know what happened in the first place? [20:08:16] a mistake was made [20:09:26] it is not nice [20:09:49] what's the best place to follow what's happening? there is no info at bugzilla [20:10:10] GerardM-: you are a bot operator? [20:10:20] sometimes [20:10:26] sometimes o_O [20:10:44] my bot has a lot of edits [20:10:52] but at the moment it is doing nothing [20:17:41] Where can I follow what's happening? The db is broken for over three days now and there is not much information at the moment... I would feel better if I would know where to follow the attempts to bring the database back ;) [20:19:23] apper: there was a thread on labs-l, IIRC? [20:19:41] apper: and it picked the worst time to happen, it being a 4 day weekend [20:19:58] oh, right [20:20:08] oh right, long weekend in the US, I forgot [21:32:24] Coren|AFK, did something happen to pagecounts again? They don't seem to be mounted [21:52:06] petan, become scottytools [21:52:06] sudo: sorry, a password is required to run sudo [22:02:35] https://tools.wmflabs.org/xtools/bash gives strange error message [22:33:32] giftpflanze, where? I'm not seeing any. [22:34:05] petan, ping [22:49:20] i'm running an irc bot on tools labs. but for some reason the bot is "closed". i'm suspecting that the process is killed. what can i do to avoid that? [22:52:14] MRX: are you running it using qsub and the continuous option? [22:52:46] nope [22:53:12] your bot can quit for several reasons, you can never be sure it will run forever, that's why you should make sure it restarts when it dies [22:53:41] using jsub will ensure that. [22:53:41] are you running it using qsub at all?
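The "make sure it restarts when it dies" advice above is exactly what jsub's continuous option automates. As a standalone illustration of the idea (a sketch only, not how gridengine implements it), a shell supervisor might look like:

```shell
# Rerun a command whenever it exits nonzero, giving up after max_failures
# consecutive failures so a hopeless job doesn't loop forever.
keep_running() {
  max_failures=$1; shift
  fails=0
  until "$@"; do
    fails=$((fails + 1))
    echo "command died ($fails/$max_failures); restarting" >&2
    if [ "$fails" -ge "$max_failures" ]; then
      return 1
    fi
  done
}
```

Usage would be along the lines of `keep_running 5 php cobot.php` (the script name comes from the log later on; the function itself is hypothetical).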
[22:53:59] no. i wasn't using qsub [22:54:05] MRX, grrrr [22:54:26] So you're one of those users who ends up blowing up the login server. [22:54:36] no task should run directly on tool labs any longer; all tasks should use qsub - and the good thing about this is that you have a "continuous" option, which ensures that the script is restarted when it dies [22:55:05] oh, sorry [22:55:20] * MRX will use qsub from now on [22:55:44] MRX, I hope so. :p If you need help sending a script to jsub/qsub, just ask. [22:56:26] the doc is at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help#Submitting.2C_managing_and_scheduling_jobs_on_the_grid [22:56:37] what is the difference between jsub and qsub? [22:56:43] Actually, here's an example of setting up a continuous job. [22:56:48] MRX, there isn't any. [22:57:16] just use jsub... it's a more powerful wrapper for qsub [22:58:22] just something like "jsub -N myircbot -continuous -mem 500m myircbot.sh" ... after "-mem" you have to write how much memory you need, in this case 500 MB ;) [22:58:52] and instead of "myircbot.sh" you also can write something like "php myscript.php" and so on [22:59:20] cd <tool directory> && jsub -mem <memory> -cwd -continuous -N <job name> -o <output file> -e <error file> <command> [22:59:41] MRX, ^ Full important usage of jsub [23:01:19] MRX, use qstat to view all active jobs. [23:01:52] MRX, use qdel -j <job number> to delete a certain job. Those are pretty much the important things you need to know. [23:02:12] the job is running now [23:02:17] but the bot is not connecting.. [23:02:39] MRX, what language is it written in? [23:02:43] php [23:03:02] Oh good. [23:03:19] you can debug using the output and error files... either you assign the paths for them directly or their name is like the name of the job plus .err and .out (normally in the tool's main directory) [23:03:31] Lemme look at it. [23:03:39] Cyberpower678: do you mean "Oh good" or "Oh god"? ;) [23:03:50] both the output and the error files are empty [23:03:51] apper, Oh good.
[23:04:06] MRX, do you have a PHP debugger on your computer? [23:04:14] MRX: and the job is still running? [23:04:25] the job is running.. [23:04:25] MRX: and it connects if you run the script directly? [23:05:10] MRX, can I see the script? I've got an IDE and a webserver installed on my computer. I can probably see what's causing the issue. [23:05:42] https://github.com/MistrX/CoBot [23:06:34] Ugh. [23:06:44] I hate when people setup scripts like that. [23:08:01] heh [23:08:04] i coded it.. [23:08:06] MRX, from the terminal interface run the command cd php and watch it run. Pastebin the output. [23:08:33] Cyberpower678, i've been running it using "nohup php cobot.php" [23:08:36] and it worked [23:09:10] use qstat and pastebin the output. [23:10:36] MRX, you still there? [23:10:45] yes [23:10:48] just slow internet [23:11:06] http://pastebin.com/DwPguU6L [23:11:46] and i found this on the php5.err: http://pastebin.com/9mgAW6d5 [23:12:44] Can I see the command you submitted? [23:13:16] It's a typical php message if not enough memory is allocated [23:13:40] oh [23:13:51] hedonil, really? I've never seen that message before. [23:13:52] i'll try allocating more memory [23:14:10] MRX, don't use php5. Use php [23:14:23] i used php [23:14:33] Cyberpower678: was my first labs issue :) [23:14:35] local-wmk-tools@tools-login:~/cobot$ jsub -continuous php cobot.php [23:15:45] try -mem 500m or /more/ [23:16:21] MRX, use this to help [23:16:22] cd <tool directory> && jsub -mem <memory> -cwd -continuous -N <job name> -o <output file> -e <error file> <command> [23:16:42] /var/spool/gridengine/execd/tools-exec-05/job_scripts/1711917: line 4: 1470 Segmentation fault /usr/bin/php5 cobot.php [23:17:34] hedonil, ^ got an explanation for that? I've never had these PHP issues before. [23:17:40] cd cobot && jsub -mem 255 -cwd -continuous -N cobot -o cobot.out -e cobot.err php cobot.php [23:17:51] MRX: I think it's a memory issue [23:17:55] use "-mem 500m" [23:18:02] or "-mem 750m" [23:18:24] it never used more than 10m on my tests..
https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help#Why_am_I_getting_errors_about_libgcc_s.so.1_must_be_installed_for_pthread_cancel_to_work.3F [23:18:48] MRX: php needs >250m per default [23:18:55] the shared memory is part of it... [23:18:58] oh [23:19:17] a simple test script "print "hello world";" needs about 270m [23:19:27] MRX, try using cd $HOME && jsub -mem 512m -cwd -continuous -N cobot -o $HOME/cobot.out -e cobot.err php cobot.php [23:19:33] so if your script needs 10m, just use "-mem 300m" [23:20:12] i could run it [23:20:14] thanks! [23:20:39] MRX, so it's working now? [23:21:03] yep :D [23:21:13] Yay. [23:21:18] I actually helped. :p [23:21:24] like hedonil this was my first labs issue, too ;) [23:21:40] I've never had that issue. :p [23:22:13] I just thought standard -mem values would suffice to run a hello world php script ;) [23:23:46] Cyberpower678: that's because you are special :P [23:24:23] hedonil, I guess. My bot even has its own exec node on the labs grid. [23:24:28] :p [23:24:52] Cyberpower678: Celebrities :) [23:25:07] hedonil, :)