[00:42:17] is there an sql table where I can look up namespace IDs? I have the name of a namespace (e.g. "Wikipedia talk") and I need to look up its namespace number. I need to be able to do this for any language wiki [00:42:45] there was a db on toolserver for this, it doesn't seem to be included in the meta_p database, as far as I can tell [00:47:23] and any query to enwiki_p.namespace gives me this: [00:47:23] ERROR 1356 (HY000): View 'enwiki_p.namespaces' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them [00:47:40] That's a really outdated table [00:47:49] Why can't you just use the API? [00:48:13] I just wanted to see if there was a way to do it in SQL, since that would be a billion times faster than the API [00:48:19] there was a way on toolserver to do it in SQL [00:49:20] SELECT ns_id FROM toolserver.namespacename WHERE dbname="enwiki" AND ns_name="Template"; [00:51:57] Scottywong: That's even worse than just outdated; the data in the underlying table is outright wrong. [00:52:38] ok [00:52:45] Scottywong: And mediawiki provides no way to keep such a table even vaguely in sync with the actual namespaces (which may be, and are, messed around with by extensions) [00:52:46] so API is the only way to get that, I would imagine [00:53:21] Scottywong: It's the only reliable way yeah. [00:53:29] ok thanks [00:54:00] Actually, that's been rather the outstanding issue with mediawiki in general (no in-database trace of runtime config) and there's probably a small-numbered bz about it. [00:55:36] so can we get someone to redesign mediawiki from the ground up for me? [00:56:10] Scottywong: You just volunteered. Good luck! [01:00:15] PyMediaWiki 1.0 [01:09:01] Hm.. Yuvi's HTTPS proxy isn't working?
Getting Bad Gateway or timeouts [01:09:16] Since it takes over both http and https, http isn't working for those projects, either [01:09:24] https://cvn.wmflabs.org/ http://cvn.wmflabs.org/api.php [01:11:45] it's working for my project [01:11:54] is your backend properly working? [01:13:16] I think so.. http://ganglia.wmflabs.org/latest/?c=cvn&h=cvn-apache2&r=day [01:13:28] Trying a reboot [03:31:08] How would I go about getting user_email_authenticated unmasked? [03:36:24] Dispenser: well, given that the abusefilter has been leaking that for 3+ years now, I don't think it should be a big deal [03:36:32] Coren is the person to talk to though [03:54:08] Coren: Can we unmask user_email_authenticated? [03:54:39] Coren: Can user_email_authenticated be unmasked? [no RTL chars]* [03:59:23] why would you want that? [04:19:34] Cyberpower678: Hey, could you remove the German data protection compliance from xtools? (Or make me a maintainer?) [10:09:49] Coren: enwiki_p is back up to >11 hrs replag; the same tool may be holding a lock again? [10:12:51] !replag [10:12:57] :( [10:16:21] @replag [10:16:26] \O/ [10:17:10] ok, who's going to volunteer to write a replag tool for Tool Labs? [10:17:54] what is a replag tool [10:17:58] Hey there! We're building a PubSubHubbub extension for MediaWiki as part of our Bachelor's, but we need a WikiLabs instance for testing. Does anyone know what the status of our request is or why it hasn't been processed so far (https://wikitech.wikimedia.org/wiki/New_Project_Request/PubSubHubbub)?
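The namespace lookup discussed at the top of the log (the API instead of the dead enwiki_p.namespaces view) can be sketched as follows. This is an illustrative example: the helper names and the heavily trimmed sample response are made up, but the query itself (action=query&meta=siteinfo&siprop=namespaces|namespacealiases) is the standard siteinfo request and works on any language wiki.

```python
import urllib.parse

# Sketch of the API-based namespace lookup recommended above, replacing the
# old toolserver.namespacename SQL query. Helper names are illustrative.

def build_siteinfo_url(api_base):
    """Build the siteinfo query URL for a wiki's api.php endpoint."""
    params = {
        "action": "query",
        "meta": "siteinfo",
        "siprop": "namespaces|namespacealiases",
        "format": "json",
    }
    return api_base + "?" + urllib.parse.urlencode(params)

def namespace_id(siteinfo, name):
    """Resolve a namespace name (local, canonical, or alias) to its number.

    MediaWiki treats namespace names case-insensitively and '_' like ' '.
    """
    wanted = name.replace("_", " ").strip().lower()
    for info in siteinfo["query"]["namespaces"].values():
        names = [info.get("*", ""), info.get("canonical", "")]
        if any(n.replace("_", " ").lower() == wanted for n in names if n):
            return info["id"]
    for alias in siteinfo["query"].get("namespacealiases", []):
        if alias["*"].replace("_", " ").lower() == wanted:
            return alias["id"]
    return None

# Heavily trimmed, illustrative sample of an enwiki siteinfo response:
SITEINFO_SAMPLE = {"query": {
    "namespaces": {
        "4": {"id": 4, "canonical": "Project", "*": "Wikipedia"},
        "5": {"id": 5, "canonical": "Project talk", "*": "Wikipedia talk"},
        "10": {"id": 10, "canonical": "Template", "*": "Template"},
    },
    "namespacealiases": [{"id": 4, "*": "WP"}],
}}
```

Fetching that URL once per wiki and caching the parsed result gives a tool the same data the toolserver table held, without the sync problem Coren describes.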
[10:23:35] Nikerabbit: a bot that checks and reports lag across all the servers [10:25:41] tsbot Betacommand: s1-rr-a: 5w 2d 8h 11m 51s [+1.00 s/s]; s1-rr-a-wd: error; s1-user-c: 4w 4d 1h 5m 25s [+1.00 s/s]; s1-user-wd: 9w 5d 21h 45m 34s [+1.00 s/s]; s2-user-c: error; s2-user-wd: 8w 1d 2h 36m 48s [+1.00 s/s]; s3-user-wd: 21w 3d 16h 3m 35s [+1.00 s/s]; s4-user-wd: 17w 2d 21h 20m 9s [+1.00 s/s] [10:25:43] tsbot Betacommand: s5-rr-a-wd: 9w 1d 17h 4m 24s [+1.00 s/s]; s5-user-c: 6w 1d 7h 45m 8s [+1.00 s/s]; s6-user: 9h 50m 31s [+0.02 s/s]; s6-user-wd: 17w 2d 21h 20m 24s [+1.00 s/s]; s7-user-wd: 17w 2d 21h 20m 31s [+1.00 s/s] [10:33:09] Betacommand: at least replag here hasn't gotten to _those_ levels yet! [10:35:39] russblau: getting a replag for a specific db? [10:37:04] zhuyifei1999: I meant what Betacommand just demonstrated (in the #wikimedia-toolserver channel, if you type @replag, the bot responds as shown) [11:06:14] s/s? [11:08:22] the user groups page is broken, VERY VERY evil FuzzyBot.... [11:09:01] this bot is unusable... [11:09:07] on meta... [11:09:37] Steinsplitter: don't blame the bot, it only does what is instructed [11:10:27] Nikerabbit: no.... it does other things.... [11:11:54] Nikerabbit: https://meta.wikimedia.org/w/index.php?title=User_groups/de&diff=prev&oldid=6677275 i did not approve this change... [11:13:26] believe me.... i have marked hundreds of pages for translation :P [11:15:16] Steinsplitter: it's just applying changes for https://meta.wikimedia.org/w/index.php?title=User_groups&diff=6084314&oldid=5977495 [11:15:24] the translation has become outdated and needs updating [13:32:19] Lock broken on enwiki. [13:45:48] Coren: have a second to help me find out why a script won't run? [13:46:12] Betacommand: Sure. [13:46:30] http://tools.wmflabs.org/betacommand-dev/cgi-bin/replag [13:46:40] works fine via commandline [13:47:31] Is it running as a webservice or relying on the default apache?
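The bot output above encodes lag as weeks/days/hours/minutes/seconds, followed by a growth rate in seconds per second (the "[+1.00 s/s]" that the later "s/s?" question asks about). A small sketch for turning those strings back into a number of seconds; the function name is made up:

```python
import re

# Parse lag strings in the format the toolserver bot prints above,
# e.g. "5w 2d 8h 11m 51s", into total seconds. The "[+1.00 s/s]" growth-rate
# suffix is ignored, and strings like "error" simply yield 0.

UNIT_SECONDS = {"w": 7 * 86400, "d": 86400, "h": 3600, "m": 60, "s": 1}

def lag_seconds(text):
    """Sum every '<number><unit>' token found in the string."""
    return sum(int(n) * UNIT_SECONDS[u]
               for n, u in re.findall(r"(\d+)([wdhms])", text))
```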
[13:48:02] it's a standard python script [13:48:18] * Betacommand doesn't get too fancy [13:49:05] Coren: http://pastebin.com/y8j2xE0Q sample output [13:49:44] execve() for program "/data/project/betacommand-dev/cgi-bin/replag" failed: Permission denied [13:50:18] "Doh?" :-) [13:50:24] Forgot to make it executable. [13:51:10] Coren: I invoked chmod a+x replag [13:51:41] I just re-set chmod 771 [13:51:44] Ah, it's also not readable. a+rx is usually what you want. [13:52:02] (The interpreter needs to be able to read the script) [13:52:13] * Betacommand grumbles about bad instructions from Coren [13:52:15] And bam, it workies. [13:52:46] Not "bad instructions"; when I told you it needed to be made executable it already /was/ readable. :-) [13:53:38] Coren: we have a functioning replag tool now :P [13:56:30] Coren: have you thought about making the equivalent of the toolserver table for labs? [13:56:57] What table are you talking about? [13:57:41] Coren: take a look at the toolserver: all of the sql servers have a database called toolserver with useful information [13:58:23] language, family, db name, server info, namespace information [13:59:18] There is meta_p.wiki with much of this; but namespace information was discussed at length and really shouldn't be in a database. [14:00:25] Coren: http://pastebin.com/RJvNrV2g [14:00:25] (There is a meta_p on every slice) [14:00:57] Coren: why shouldn't namespace stuff be in the database? it makes things so much easier at times [14:01:53] Betacommand: Because namespaces are generated dynamically by the running wikis, and the only way to get that in the database would be to schedule API calls to stuff it there at intervals -- it makes no sense for tools to not just do the API call themselves instead.
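The approach Coren argues for here (the tool does the API call itself, at most once per wiki, and keeps the result in the application) can be sketched as a tiny in-process cache. `fetch` is a hypothetical injectable callable that performs the actual siteinfo request; the cache structure is illustrative, not any real Tool Labs code.

```python
# In-application namespace cache: one API call per wiki at most, with the
# small (<50 entry) namespace mapping held in a dict instead of being joined
# against a database table. `fetch` is a hypothetical callable doing the
# actual siteinfo API request for a given wiki.

_NS_CACHE = {}

def get_namespaces(dbname, fetch):
    """Return the namespace mapping for dbname, calling fetch at most once."""
    if dbname not in _NS_CACHE:
        _NS_CACHE[dbname] = fetch(dbname)
    return _NS_CACHE[dbname]
```

Injecting `fetch` also makes the caching behaviour trivially testable with a stub.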
[14:02:31] https://bugzilla.wikimedia.org/show_bug.cgi?id=48625 [14:03:00] Coren: potential to save a lot of api calls and enables namespace information in the results of sql queries [14:04:29] Betacommand: You don't save "a lot" of API calls, you save exactly one at most; but joins against a table like this would multiply working set size, double index lookups, and generally increase the load of the DB for what is, ultimately, a <50 element hash table that really belongs in the application. [14:05:03] The load increase isn't large, but it adds up and is completely unneeded. [14:05:46] Coren: https://gerrit.wikimedia.org/r/#/c/100377/ if you get a chance [14:06:43] (03CR) 10coren: [C: 032] ""I've been meaning to do this for some time"." [labs/toollabs] - 10https://gerrit.wikimedia.org/r/100377 (owner: 10Anomie) [14:06:54] (03CR) 10coren: [V: 032] ""I've been meaning to do this for some time"." [labs/toollabs] - 10https://gerrit.wikimedia.org/r/100377 (owner: 10Anomie) [14:10:05] anomie: Pushed. [14:10:32] \o/ [14:20:27] * hedonil pokes Coren gently about https://bugzilla.wikimedia.org/show_bug.cgi?id=55652 :-D [14:21:24] Heh. No poking necessary, I was doing a changeset with a few recently requested packages now. :-) [14:22:20] Coren: secretly I knew you were doing that [15:38:38] Cyberpower678: around? [15:38:47] Dispenser, yes [15:39:15] Could we remove the opt-in check for the edit counter? [15:39:20] Dispenser, no [15:39:30] why not? [15:39:43] Dispenser, because the global community wants it kept. [15:42:01] Dispenser, sorry [15:42:07] Are you referring to the Germans? It was a legal compliance issue. [15:42:18] No. [15:42:26] Consensus was to keep it. [15:42:33] Well I guess I'll have to fork it [15:42:42] Dispenser, why? [15:45:19] Because that was only implemented to comply with the EU data protection policy while we were on the Amsterdam-based Toolserver.
I know because I helped demonstrate to X that it was possible to generate the same information via the API and Excel, but he didn't want the hassle. [15:46:03] Dispenser: bad idea. these settings are meant to bear in mind the privacy of hundreds of volunteers [15:46:06] Dispenser, it's no longer there for any compliance. It's there because people want it there. [15:46:19] Dispenser: it's about morals, not legality [15:47:38] Dispenser, https://meta.wikimedia.org/wiki/Requests_for_comment/X!%27s_Edit_Counter [16:00:41] Let's see what they say after I'm done [16:00:58] Dispenser, let's see who says what? [16:05:27] The Germans. Their privacy law is crazy. Censoring history and criminals [16:05:49] Dispenser, you're not going to convince them. I'm a German myself. [16:06:02] Germans can be quite stubborn. [16:06:06] As am I [16:06:23] I still find it immensely amusing to see such calls for privacy in a society where your neighbors start asking questions about what you have to hide when you keep your curtains closed. (Yes, I've had that happen to *me*) [16:06:49] Admittedly, that was in Düsseldorf; perhaps the mores are different Berlin-side. :-) [16:07:28] Dispenser, you will not be able to override the links on their site. [16:07:36] They'll likely say no. [16:08:14] Though, I do agree with you. We shouldn't need an opt-in. If you can convince them, you are free to launch another RfC to have the opt-in requirement removed. [16:08:53] But opt-in is in place because of consensus. [16:09:01] Dispenser, ^ [16:10:58] WP:IGNOREALLRULES [16:11:21] Dispenser, doesn't apply. [16:12:57] Dispenser: Wearing my WMF staff hat, I have no comment. Wearing my "long time community member and functionary" hat, however, I would advise against this sort of unilateral move; it'll just throw oil on what is a simmering dispute that needs a bit of diplomacy instead. [16:14:06] Coren: Can we unmask user_email_authenticated before stuff yells at me for hammering the abuse filter for it?
And while we're at it create a view for user_properties? [16:15:11] Coren, catching up with Nik_'s project request, I note that I don't actually have a link to the list of pending requests. Is your SMWfu strong enough that you can tell me what the query would be? [16:15:17] + I will put it in the sidebar... [16:15:49] andrewbogott: Yeah, gimme a sec. [16:16:42] Dispenser: Have you seen [[meta:Talk:Privacy policy#Generation of editor profiles]]? Forking *now* to remove the opt-in would likely be particularly bad timing. [16:17:31] https://wikitech.wikimedia.org/wiki/Special:Ask/-5B-5BCategory:New-20Project-20Requests-5D-5D-5B-5BIs-20Completed::No-5D-5D/-3FProject-20Name/-3FProject-20Justification/-3FModification-20date/format%3Dbroadtable/sort%3DModification-20date/order%3Dasc/headers%3Dshow/searchlabel%3DOutstanding-20Requests/default%3D(No-20outstanding-20requests)/offset%3D0 [16:17:54] And oy! I need to hack at this too. I forgot I used to rely on Ryan_Lane for those. :-P [16:24:21] Dispenser, what anomie just posted reinforces that you should NOT fork the edit counter and remove the opt-in. [16:25:21] Coren: wow, ok, thanks. [16:25:50] Coren: oh, can it filter for just pending requests? I think that's all of them [16:25:52] I /hope/ [16:26:10] Oh, no, I'm wrong, we're just way behind :( [16:26:26] andrewbogott: Sorry. That's "Is-Completed::No" [16:26:53] Yeah, so Ryan was actually doing stuff. Who knew? :-) [16:32:31] Coren: heh. you just wait :) [16:32:41] I thankfully automated a lot of shit [16:41:54] Ryan_Lane: Lots of these project requests are from six months ago. So I'm guessing either you didn't patrol the project request queue or you had a higher bar for project creation... [16:42:07] I had a higher bar [16:42:18] and I usually put something on the talk page [16:42:29] ok, I'll read. [16:42:35] we don't really have a good way to close those out [18:36:44] lbenedix: 9.5G for your tool? Seriously?
[18:37:41] I don't know what wikidatawiki_fe is trying to do, but it's actually using less than 1G; putting your limit that high is a serious overallocation. [18:42:35] Coren: does sge have any way to track memory usage? [18:43:02] Betacommand: It keeps a tally of your current and peak usage, if that's what you mean. [18:43:23] Visible here https://tools.wmflabs.org/?status and also via qstat [18:44:48] Coren: is it possible to get that status list sortable via project? [18:47:13] Hm. Well, I never actually considered doing it, but it's a fairly simple PHP that just iterates over the XML output of qstat so it should be doable. Honestly, though, with the migration I have little free time for it. [18:47:35] At least until late Jan. [18:51:29] (03PS1) 10Aude: Report Capiunto changes to #wikidata [labs/tools/grrrit] - 10https://gerrit.wikimedia.org/r/100607 [18:52:40] (03CR) 10Hoo man: [C: 032] Report Capiunto changes to #wikidata [labs/tools/grrrit] - 10https://gerrit.wikimedia.org/r/100607 (owner: 10Aude) [18:55:29] Hey Coren, how can I unmask more database views to bring into parity with the TS? [18:56:44] Dispenser: For the most part, you cannot. Many things the TS made available should never have been. For some, we can okay redacted views to present either aggregate or specific data (like we did for archive); but that needs a use case and an okay from legal (which is generally fast and easy to get) [18:57:17] The general first step is: open a bugzilla explaining what data you need. I'll add legal to it at need. [18:57:47] * anomie notes that the Tool Labs website code is in git, repo labs/toollabs, if Betacommand or anyone else wants to submit patches [18:58:23] anomie: full url? [18:58:37] Would user_email_authenticated be fine since the abuse filter has been leaking it? [18:59:34] hedonil: https://gerrit.wikimedia.org/r/#/admin/projects/labs/toollabs is the Gerrit page, https://git.wikimedia.org/tree/labs%2Ftoollabs.git if you just want to browse.
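Coren's description of the status page ("iterates over the XML output of qstat") suggests how Betacommand's per-project sorting request could work. A sketch in Python rather than the page's PHP: element names follow Grid Engine's `qstat -xml` format as commonly documented (`job_list`, `JB_name`, `JB_owner`), and the sample document is fabricated for illustration, not real output.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

# Group jobs by owner (tool account) from qstat's XML output so a status
# list can be shown per-project. Element names follow Grid Engine's
# `qstat -xml` format; the sample below is illustrative only.

def jobs_by_owner(xml_text):
    grouped = defaultdict(list)
    root = ET.fromstring(xml_text)
    for job in root.iter("job_list"):
        owner = job.findtext("JB_owner", "?")
        grouped[owner].append(job.findtext("JB_name", "?"))
    return dict(grouped)

QSTAT_SAMPLE = """<job_info>
  <queue_info>
    <job_list state="running">
      <JB_job_number>101</JB_job_number>
      <JB_name>replag</JB_name>
      <JB_owner>tools.betacommand-dev</JB_owner>
    </job_list>
    <job_list state="running">
      <JB_job_number>102</JB_job_number>
      <JB_name>webservice</JB_name>
      <JB_owner>tools.cvn</JB_owner>
    </job_list>
  </queue_info>
</job_info>"""
```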
[18:59:48] anomie: thx [19:02:38] Dispenser: That sounds more like a bugreport to fix the leak rather than solid grounds for opening it up to me. :-) You'd have to get the okay from Legal for that -- I'm not sure what their stance is on that particular column, though I don't expect it'd be considered very sensitive. [19:06:52] On the TS I used it to add a third state. Ex: User emailable: [Yes/Disabled/No]. We've also considered spamming those without email addresses to assign one in case of account compromise. [19:07:51] heya labsies [19:08:07] anyone have a good solution for puppetizing passwords in labs? [19:08:33] ottomata: not really [19:08:43] the private repo isn't really private [19:09:33] aye, so [19:09:38] i'm helping dan puppetize wikimetrics in labs [19:09:58] and he's got config files that contain google auth secret keys, and mysql passwords for its own db and for the labsdb stuff [19:10:07] so, should I do something hacky? [19:10:17] let dan put those pws somewhere on his instances manually [19:10:25] and then have puppet be tricky about pulling them into the real config files? [19:10:26] well, you can put it in /data/project [19:10:37] which would separate it from the instance [19:10:55] hm, yeahhhh [19:10:55] or you could put it on the instance, and back it up to /data/project [19:11:18] well, we were hoping puppet could somehow do it, hm [19:11:33] i mean, that's fine to put it there [19:11:35] hm [19:11:40] one possibility is to extend the puppet config to include something local to the instance [19:11:41] for modules [19:11:50] and include it via the local module [19:11:54] hmmmm [19:11:56] local module...
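One pattern from this discussion — keep credentials out of the (not actually private) puppet repo, place them on the instance or under /data/project, and have the application read them at runtime — might look like the sketch below. The file layout, section name, and key names are invented for illustration; the owner-only permission check is a common precaution, not anything the channel prescribed.

```python
import configparser
import os
import stat

# Read an ini-style credentials file that lives only on the instance
# (e.g. somewhere under /data/project/<tool>/), refusing files that are
# group- or world-readable. All names here are illustrative.

def load_credentials(path):
    """Return the [credentials] section as a dict; reject lax permissions."""
    mode = os.stat(path).st_mode
    if mode & (stat.S_IRGRP | stat.S_IROTH):
        raise PermissionError(f"{path} must be readable by the owner only")
    parser = configparser.ConfigParser()
    parser.read(path)
    return dict(parser["credentials"])
```

Puppet (or the app's config template) then only needs to know the path, never the secret itself.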
[19:12:10] oh [19:12:11] hm [19:12:25] hmm, yeah [19:13:22] hmm [19:13:22] or [19:13:22] hm [19:13:30] there is an empty passwords.pp file in ops/puppet [19:13:36] he could just edit that locally [19:13:48] and then that would work in production as well, if we use the same class names [19:13:49] i think [19:15:48] ottomata: Another possibility; make a .deb with the credential files and have it installed from a local repo. [19:15:51] Dispenser: the email information is available on wiki [19:16:14] i think it's ok if there is some manual intervention here, but i want this puppetization to work in production one day [19:16:22] so using the passwords.pp file, if it works [19:16:24] would be ideal [19:16:31] because it works the same in both labs and prod [19:17:26] Betacommand: partially, it gives user_email_authenticated ORed with up_property="disablemail" [19:17:50] s/ORed/ANDed/ [19:18:33] Dispenser: I mean via API http://en.wikipedia.org/w/api.php?action=query&list=users&ususers=Betacommand&usprop=groups|editcount|gender|emailable&format=jsonfm [19:24:42] (03CR) 10Aude: "jenkins doesn't submit here" [labs/tools/grrrit] - 10https://gerrit.wikimedia.org/r/100607 (owner: 10Aude) [19:31:12] Betacommand: That API query is "user_email_authenticated AND NOT disableemail" [20:02:40] Yup, u2815_p is a very intuitive database name [20:35:51] As long as the number is your userid... [20:37:47] What was wrong with u_dispenser_p [20:38:38] And its not really portable... [20:39:32] user databases are still a bad idea i guess. wasn't labs there to end that? [20:41:22] What was right with u_dispenser_p [20:42:46] It easily identified who it belonged to [20:44:58] easier to fix when hard coded... [20:47:56] Dispenser: Except that usernames aren't unique across projects. [20:48:16] Well, for human users they are but not for service users. [20:48:38] And human users generally should not have databases.
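The uid-based names like u2815_p exist because, as Coren explains just below, MySQL account names were hard limited at 16 characters, so readable names like tools_servicegroup could not be used for grants. An illustrative sketch of that trade-off — the prefix and fallback scheme are made up for the example, not the actual Tool Labs provisioning code:

```python
# Why uid-based names: MySQL's 16-character hard limit on account names
# means a readable scheme only works for short service-group names; anything
# longer has to fall back to the numeric uid. Illustrative only.

MYSQL_MAX_USER_LEN = 16

def mysql_account_name(service_group, uid):
    """Prefer a readable account name; fall back to the uid when too long."""
    readable = "tools_" + service_group
    if len(readable) <= MYSQL_MAX_USER_LEN:
        return readable
    return "u%d" % uid
```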
[20:49:03] That doesn't seem possible in Unix or MediaWiki [20:51:15] Coren: even then, tools_projectname_p would have been preferable over u48397h48d9741_p [20:51:55] valhallasw: Except that mysql user names are hard limited at 16. [20:52:12] Oh. Right. [20:52:24] why does that matter? [20:52:33] valhallasw: That's why I went with UIDs, the original plan /was/ something like tools_servicegroup_p [20:53:10] giftpflanze: Because the only way to systematically allow database creations is to include the username as part of the database names. [20:53:37] weird system [20:53:37] giftpflanze: Ergo, database names could not be constructed from usernames. [20:53:45] giftpflanze: Mysql. Yeay. [20:54:54] anyone happen to know why the beta cluster would have difficulty writing a temp file it's downloading from an external url? [20:55:15] (Well, strictly speaking, it's not database /creation/ that needed the username but database /grants/; same difference in practice) [20:55:36] dan-nl: do you get a fatal / some exception? [20:55:52] dan-nl: Not enough information to help diagnose; do you know where it's trying to write the file? [20:55:53] Error writing temporary file [20:56:19] dan-nl: Maybe out of space? [20:56:25] UploadFromUrl->reallyFetchFile() [20:57:13] BTW, can we get EXPLAIN fucking fixed, I've been optimizing queries _blind_ for 9 months [20:57:19] line 272 of UploadFromUrl.php [20:57:47] Dispenser: language... [20:58:04] Coren: how can i check the available disk space? [20:58:17] dan-nl: df [20:58:18] Dispenser: and to answer your question: iirc it's a mysql bug -- explain does not work on views where you do not have create (?) privileges [20:58:58] http://bugs.mysql.com/bug.php?id=64198 [20:59:16] valhallasw: Not quite; it doesn't allow explain on views where you don't also have select on the underlying table.
According to Oracle, it's a security "feature" because otherwise you could "estimate the size of the underlying table" [20:59:28] only /dev/vda1 is at 84%; the others look okay [21:00:00] dan-nl: Probably not that then. I'd need to know more about what is trying to write where to help. Anything in the logs at all? [21:00:34] valhallasw: Yes, that's completely insane. :-) [21:00:57] Coren: running a gwtoolset media file job … it attempts to download an image file by url [21:01:15] the /data/project/logs/runJobs.log [21:01:33] Coren: the SHOW VIEW priv they mention in the bug does not help? [21:01:50] dan-nl: Right, but the important question is where it's trying to put the file. [21:02:05] valhallasw: No, it's /also/ required in addition to SELECT on the underlying table. [21:02:17] I see. [21:02:24] not sure … looking into it [21:03:21] https://bugzilla.wikimedia.org/show_bug.cgi?id=48875 [21:04:14] In particular look at comment 7 where Brad Jorsch did some good research. [21:04:29] shouldn't fixing that locally on labs be trivial? [21:04:57] wfTempDir() [21:05:01] i think [21:05:55] giftpflanze: Your definition of "trivial" includes "maintaining a fork of mysql". I don't think we use the same meaning of "trivial". :-) [21:06:07] mm, ok :) [21:06:24] dan-nl: we have Ganglia in labs so you can look at disk space for all machines at http://ganglia.wmflabs.org/latest/?r=day&m=disk_free&s=by+name&c=deployment-prep&hmax_graphs=0 [21:06:37] dan-nl: though I am not sure which partition it is looking at :( [21:06:46] :) [21:07:23] dan-nl: maybe permission problems with w/images? [21:07:35] it worked earlier this evening [21:07:48] but yes, something may have changed [21:07:50] dan-nl: also get some exceptions such as Exception from line 191 of /data/project/apache/common-local/php-master/extensions/GWToolset/includes/Specials/SpecialGWToolset.php: Please contact a developer; they will need to address this issue before you can continue.
GWToolset\SpecialGWToolset::setModuleAndHandler: No form handler was created. [21:08:09] trying to figure out where wfTempDir() resolves on the beta cluster [21:08:33] you can use eval.php : mwscript eval.php --wiki=commonswiki [21:08:49] hmm … that's a weird one [21:08:55] the exception [21:08:56] that gives you a PHP prompt in the context of that wiki, so you can: print wfTempDir(); [21:09:17] the exceptions are on the shared dir : /data/project/logs/exception.log [21:11:21] !log deployment-prep used git fetch && git reset --hard on Flow extension. Just to be sure [21:11:25] Logged the message, Master [21:12:36] thanks hashar, those exceptions are from yesterday so those are not an issue now … looks like someone may have been testing the extension for security issues [21:13:24] hashar: where do i run that php eval command? [21:13:42] ah just tried it in my home dir [21:14:34] on deployment-bastion [21:14:35] k, so Coren as far as i can tell UploadFromUrl.php is trying to download the image file to /tmp and for some reason cannot write the file there ... [21:14:38] you should have mwscript [21:14:51] it is a wrapper around maintenance commands to load a wiki configuration [21:14:51] yes, just had to switch to my home dir [21:16:00] dan-nl: Well, presuming there is room in /tmp, the permissions there should allow writes so long as the target filename does not exist with another owner. Have you tried creating a file there to see if all is kosher? [21:16:37] i'll try something else … one moment ... [21:17:55] is there a way to find out how many jobs are in the beta cluster job queue? [21:20:17] dan-nl: Use maintenance/showJobs.php [21:20:26] thanks [21:48:40] dan-nl: you could make the error message "Error writing temporary file."
a bit more verbose [21:48:48] by showing the tmp file name maybe [21:49:21] the jobs should be logging in /data/project/logs/runJobs.log [21:49:31] and you have debug logs in cli.log (that is super spammy though) [21:49:35] ja, was just wondering how i might retrieve that … i'm passing control to uploadfilebyurl and it's throwing the fatal status but doesn't give me much info except for that message [21:50:09] the runJobs.log shows some var_export() as well [21:50:31] some gwtoolset internal array of settings [21:51:17] atm i'm using a print_r of the params of the job [21:51:57] makes it easy to identify the record within the metadata file, but i have not yet figured out how to retrieve any deeper info from uploadfromurl.php [21:59:26] dan-nl: you could also use a debug log group for that [21:59:41] and log your messages with wfDebugLog( 'gwtoolset', 'some message here' ); [21:59:45] will be handy [21:59:58] k, sounds like something i'll need to look into [21:59:58] the group can be configured in some of the wmf-config/*-labs.php to point to some file [22:00:10] else they fall back to $wgDebugLogFile which is either web.log or cli.log on beta [22:00:17] really handy [22:01:33] dan-nl: https://www.mediawiki.org/wiki/Manual:How_to_debug#Creating_custom_log_groups [22:01:52] well i am off for tonight [22:01:57] sorry:( [22:02:13] no worries … i need to go to bed soon as well [22:02:16] have a good night [22:02:26] you too [22:41:36] Coren: Once more with feeling: I'm going to try pulling an update from Gerrit to my labs instance, in the past this has caused 405 errors, you said you wanted to look at the logs and debug it next time it happened [22:41:47] If you need to set anything up for that, now is the time [22:44:55] Hm [22:47:26] hey Coren, i'm not sure how else i can troubleshoot this issue. i just ran a job again and get the same error without any help Error writing temporary file.
the only script that uses that error is /includes/upload/UploadFromUrl.php reallyFetchFile() on line 278 [22:47:59] i ran a simple upload that worked yesterday without issue [22:55:37] 'kay, eff it, I'm going [22:57:20] No issues anyway [23:16:26] deployment-jobrunner08.pmtpa.wmflabs is out of disk space on the root partition [23:16:55] I'm guessing this is related to the testing of the GWToolset application [23:17:16] But I don't know where the junk files are yet. [23:17:43] !log deployment-prep deployment-jobrunner08.pmtpa.wmflabs is out of disk on / [23:17:47] Logged the message, Master [23:18:58] !log deployment-prep Freed 4.2G on deployment-jobrunner08.pmtpa.wmflab by deleting files in /tmp [23:19:02] Logged the message, Master [23:22:42] ha! no more temp write errors [23:23:09] well then... [23:23:20] that might be an issue in prod, too... what's the cause? [23:23:51] bd808: did you happen to ls -l the files and save that? [23:24:15] it sounds like the job runner ran out of space, but where i'm not sure [23:24:35] and i don't know what it needed the space for ... [23:24:40] yeah, we've just run into some converters leaving behind tmp files in production, filling up /tmp [23:25:18] greg-g: I have the ll in scrollback. Where should I put it? [23:26:02] bd808: bug report so it doesn't get lost [23:26:06] what did you have to do bd808? [23:26:10] I have no idea if it's something to worry about [23:27:20] well, i can run another test using these 3000+ items later today and see if it triggers the issue again [23:28:00] from my pov, it was very confusing … kept getting a tmp write error but had no idea why …. df on beta cluster showed nothing that i knew to look for [23:29:00] dan-nl: It was local disk on deployment-jobrunner08.pmtpa.wmflabs [23:29:00] other than that, the delay i added to the job queue generation of media file jobs seems to be working for this 3000+ metadata file [23:29:32] how would i have known?
i'm not that familiar with checking disk space with df [23:29:59] did you clear out the local disk's tmp dir? [23:30:03] dan-nl: No worries. I was just letting you know what ended up being broken [23:30:23] Yes. I did an `rm *` in /tmp on deployment-jobrunner08.pmtpa.wmflabs [23:30:40] [16:18] < bd808> !log deployment-prep Freed 4.2G on deployment-jobrunner08.pmtpa.wmflab by deleting files in /tmp [23:31:10] i guess i'm curious now about what filled up the local disk … ah, okay and then that obviously freed it up … and the job runner uses that tmp folder as well as uploadbyurl [23:32:21] dan-nl: Yeah that's the "scratch" disk for the downloaded files [23:32:58] It may be that under some error conditions the php code does not clean up the temporary files [23:33:03] k, my concern then is what filled it up and didn't remove its files … hopefully it wasn't one of our jobs ... [23:33:37] That is probably not the fault of your extension. More likely it's a bug in the downloader [23:33:43] But not sure [23:33:56] I'm filing a bug now [23:33:58] right … [23:36:02] would be good to see if we can narrow down a script that can reproduce the condition … i'll see if one of our test metadata sets might be triggering it … but will do that later today. need to get some sleep now … [23:36:22] is this anything to be concerned about?
[23:36:23] ParsoidCacheUpdateJobOnEdit User:Selenium_user type=OnEdit t=126335 error=Failed connect to 10.2.2.29:80; Connection timed out [23:37:34] dan-nl: I don't think parsoid timeouts in beta are a new thing, but I'm not 100% sure [23:39:33] looks like an ill-configured parsoid extension [23:39:48] the port in particular looks suspicious [23:40:16] although- that should be the parsoid varnish [23:40:22] maybe it is actually the IP that is wrong [23:40:37] dan-nl ^^ [23:41:26] deployment-parsoidcache3 is 10.4.0.61 [23:42:14] gwicke: dan-nl is watching the runJobs log in beta pretty closely as he's testing his new GWToolset extension [23:42:19] gwicke thanks do you know how to correct it or if we should? [23:42:31] greg-g: bug filed https://bugzilla.wikimedia.org/show_bug.cgi?id=58299 [23:42:44] bd808: thanks [23:43:42] weird files [23:43:44] i don't even know yet what parsoid is ... [23:44:31] dan-nl, $wgParsoidCacheServers in the PHP config needs to be fixed [23:44:42] not sure where that lives for betalabs [23:45:21] default is $wgParsoidCacheServers = array( 'http://localhost' ); [23:45:45] correct should be $wgParsoidCacheServers = array( 'http://10.4.0.61' ); [23:46:10] found a setting in wmf-config/CommonSettings.php $wgParsoidCacheServers = array( 'http://10.2.2.29' ) [23:46:24] that should not apply to labs though [23:46:30] afaik at least [23:46:45] that is the prod service [23:46:45] do we need to add an entry in CommonSettings-labs.php with that ip you mentioned? [23:47:02] I guess so, yes [23:47:28] after the parsoid extension is included
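The /tmp fill-up on deployment-jobrunner08 earlier in the log was diagnosed by hand with df. The same check can be scripted with Python's standard library; the function names are invented and the 1 GiB threshold is an arbitrary example, not a real limit used anywhere.

```python
import shutil

# Scripted equivalent of checking `df` output: verify the scratch disk
# (e.g. a job runner's /tmp) has enough free space before jobs start
# failing with "Error writing temporary file".

def tmp_space_ok(path="/tmp", min_free_bytes=1 << 30):
    """True if the filesystem holding `path` has at least min_free_bytes free."""
    return shutil.disk_usage(path).free >= min_free_bytes

def usage_percent(path="/tmp"):
    """Used space as a percentage, roughly df's Use% column."""
    usage = shutil.disk_usage(path)
    return round(100.0 * usage.used / usage.total, 1)
```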