[00:00:08] ok, sounds good then [02:55:43] Wikimedia Labs / tools: jsub -continuous not compatible with qalter -notify - https://bugzilla.wikimedia.org/65842 (Philippe Elie) NEW p:Unprio s:normal a:Marc A. Pelletier My job was unable to get SIGUSR2 when using jsub -continuous then doing a qalter -notify; it works w/o -continuous. The p... [06:05:42] @seen scfc_de [06:05:47] @notify scfc_de [10:58:43] Possible to increase the default /var partition? [11:42:00] (PS1) Giuseppe Lavagetto: sync with prod. [labs/private] - https://gerrit.wikimedia.org/r/135758 [11:46:08] (CR) Giuseppe Lavagetto: [C: 2 V: 2] sync with prod. [labs/private] - https://gerrit.wikimedia.org/r/135758 (owner: Giuseppe Lavagetto) [12:01:52] kart_: In Tools or on your own Labs instance? [12:18:54] scfc_de: re spam: they use different sender addresses in every different mail [12:19:30] I assume the reason that there [12:19:46] *there're only Chinese spams is that English spams are already blocked? [12:21:10] um, Coren: while working on this: http://tools.wmflabs.org/render-tests/limes/web/ : [12:21:38] Coren: after "webservice start", loading the page works once, then there's the error page [12:21:53] Coren: from error.log: [12:21:56] 2014-05-27 21:23:36: (server.c.1502) unlink failed for: /var/run/lighttpd/render-tests.pid 2 No such file or directory [12:21:56] 2014-05-27 21:23:36: (server.c.1512) server stopped by UID = 0 PID = 29072 [12:22:21] any idea what this means? [12:22:50] JohannesK_WMDE: That's not an error from when it died, but from when it started -- the previous run was aborted forcibly. This normally only happens if your lighttpd managed to hit the 4G limit [12:22:55] liangent: Probably not; I assume Chinese spammers are just working harder :-). Are the sender addresses non-existing (i.e. the domains)? I filed a bug to exclude incoming mail from addresses that have no DNS entry; that should keep some of that at bay. [12:23:25] Coren: OOM? [12:24:01] JohannesK_WMDE: Take a look at qacct for those jobs; it'll tell you how much memory it peaked at. [12:27:25] In other words, what does render-test do, and is it possible that it consumes large amounts of memory? [12:31:49] Coren: not really; this is the limes map which was running under apache just fine [12:32:14] Coren: if it does, it's a bug. how do i find the job ID of the web service? [12:32:45] scfc_de: yeah they are. [12:33:30] JohannesK_WMDE: You don't have to; you can use the name "lighttpd-render-test". I just took a peek and: maxvmem 3.807G [12:34:39] So yeah, whatever it did, it hit the memory limit. Strictly speaking, it's a *virtual* memory limit so you might hit it if you mmap very large files. I've also seen some DB queries do it when the script naively reads an entire result set into memory at once. [12:37:32] Coren: first, how did you get that value? is "lighttpd-render-test" the job name (qacct -j), owner (-o), ...? [12:38:00] Job name; sorry. So "qacct -j lighttpd-render-tests" [12:38:23] (with the 's', like your tool name) [12:38:35] ah, ok right [12:40:30] Coren: i was running this under apache and never hit a memory limit. we use a DB of about 1000 entries, so that's probably not the reason... question is, how do i find out what changed :) [12:41:19] Apache never had any per-user limit. Not by coincidence, the entire webserver went down at regular intervals when it went OOM. :-) [12:42:09] i don't think this script previously used 4G of ram. :)
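A minimal sketch of the qacct lookup used above, wrapped in Python; it assumes `qacct` is on PATH as it is on the tools hosts, and that the job has already ended — qacct only knows finished jobs, so a still-running webservice needs `qstat -j` instead:

```python
import subprocess

def peak_vmem(job_name):
    """Return the maxvmem lines from gridengine accounting for a job.

    Sketch only: assumes `qacct` is on PATH and the job has finished;
    for running jobs, `qstat -j <name>` reports usage instead.
    """
    out = subprocess.run(["qacct", "-j", job_name],
                         capture_output=True, text=True, check=True).stdout
    return [line.strip() for line in out.splitlines()
            if line.startswith("maxvmem")]

# e.g. peak_vmem("lighttpd-render-tests") -> ["maxvmem  3.807G"]
```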
[12:42:50] What changed isn't (most likely) how much vmem your tool uses but the fact that it's being killed when it uses a lot. But vmem is an odd metric: it not only measures how much actual core the process uses, but it includes the entire set of mapped libraries and mmapped files (because that's how much physical memory the process *could* use in the worst case) [12:44:01] In practice of course, because of on-demand paging and shared pages, the actual physical usage could be only a fraction of this. If you want to know exactly where the memory is going you'll need to: [12:45:24] hi. how can i avoid a private file like /passwd.txt being viewed from the browser.. i changed its permissions to 600 but no luck. [12:45:34] (a) increase the limit high enough that your job doesn't get killed before you (b) examine the /proc/*/maps of the process tree to see where the vmem is going. [12:46:15] rohit-dua: That's because the webserver runs as your user, which always has read permission. Simply moving the file /outside/ of public_html (which is... public) will do the trick. [12:46:24] yes, i have an idea of the difference between vmem and rss. :) still, 4G is too much even for vmem. this has to be a bug, either in the script, or in the lighttpd config, or somewhere else. [12:46:56] Coren: uhm... yeah... which means *someone* with access to the web server will have to look at /proc, right? [12:47:34] JohannesK_WMDE: Well, given that most webservices run fine with a variety of languages it's unlikely to be lighttpd (though, of course, still possible) [12:47:42] Coren: thank you. but i need that file as it contains database keys.. or i can link the script with a parent folder outside public_html... [12:48:12] Coren: perhaps i misconfigured it. whatever. point is, i don't know and i don't have access to the web server ;) [12:48:23] JohannesK_WMDE: In which case *someone* means "anyone with a tools account". :-) But yeah, you have to do it from the web server. [12:48:49] JohannesK_WMDE: When you start the job, qstat tells you which box it ended up on. You can just ssh there from -login. [12:49:00] does everyone have access to the webserver VM now? ah... ok o.O [12:49:04] that's new [12:49:09] (This is useful for debugging in general, since you can also do things like strace, etc) [12:49:26] yes, sounds very useful! [12:50:23] JohannesK_WMDE: Hmm. Maybe not, actually. I just realized that they are infrastructure VMs (which normally don't allow it) rather than grid nodes (which normally do). They should, though. [12:50:34] If not, it's a bug. [12:51:42] Ah, looks like it's working as expected. [12:52:16] Although there is the "this is infrastructure" motd rather than the "this is an execution host" motd [12:52:40] Coren: this should be on the help page! (if it isn't already) [13:10:18] Coren: i set the limit to 8G and cannot reproduce the problem at all :) the largest vmem i see for my user is python with ~800MB max, which is a bit much, but nowhere near 4G... [13:10:34] o_O [13:10:48] What does max_vmem on the running process report? [13:10:57] I mean, through qstat [13:11:13] i can't find it [13:11:14] qacct -j 1197949 [13:11:20] error: job id 1197949 not found [13:11:46] qstat, not qacct. The latter only has ended jobs. [13:12:03] usage 1: cpu=00:00:38, mem=5.12234 GBs, io=1.37701, vmem=2.301G, maxvmem=4.470G [13:12:13] ah that's why [13:12:19] Remember that usage is the sum of all the processes. [13:13:00] So yeah, your python hitting 800G is probably what pushes it over the limit... turns out 4G is a bit conservative for non-php services it seems.
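Coren's step (b) above — seeing where the vmem actually goes — can be scripted from /proc; a rough sketch, to be run on the exec host against one PID of the job's process tree (Linux-specific, and the function name is made up for the example):

```python
from collections import Counter

def vmem_by_backing_file(pid):
    """Sum mapping sizes from /proc/<pid>/maps, grouped by backing file.

    Each maps line is "start-end perms offset dev inode [pathname]";
    anonymous mappings have no pathname. The sizes are virtual — i.e.
    what the rlimit sees — not resident memory.
    """
    sizes = Counter()
    with open("/proc/%d/maps" % pid) as maps:
        for line in maps:
            fields = line.split()
            start, end = (int(x, 16) for x in fields[0].split("-"))
            name = fields[5] if len(fields) > 5 else "[anon]"
            sizes[name] += end - start
    return sizes.most_common(10)
```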
[13:13:23] Well, 800G is still nothing to sneeze at, mind you. [13:13:41] Coren: o.O these are separate FCGI processes i think [13:13:56] They're still counted towards the job's memory usage. [13:14:05] and ... you mean 800M ;) [13:14:12] Coren: Heya! Who could be the right person to investigate mail notification lag (mchenry) for a Labs instance? Rather: the Labs instance, Labs itself, or the mail part? See http://fab.wmflabs.org/T348 for details :-/ [13:14:26] Yes, yes I do. 800G wouldn't be large, it'd be silly. :-) [13:15:18] andre__: Lemme look at the issue. [13:15:23] Coren, thanks [13:15:58] Coren: i see. well, doesn't vmem include the shared stuff? like all the shared libraries? the RSS for the separate processes is about 11M each, max. wouldn't that be a better measure than vmem? [13:18:00] JohannesK_WMDE: It would for the general case; but rlimits (which is what is being measured) only work for the worst-case scenario (nothing is shared); this is why the limits are calculated to overcommit the nodes in the right proportions. In practice, this means that the most reliable way to set a limit on your jobs is to measure their usage, then set a reasonable cap above that. [13:19:13] JohannesK_WMDE: Also, gridengine actually *cannot* limit on RSS, mostly because the kernel doesn't offer an rlimit for that (nor, indeed, a getrusage) [13:20:17] The fundamental reason is rather deep in the kernel; a process's RSS can vary widely depending on what *other* processes happen to be doing and general memory pressure, whereas its vmem is truly process-specific. [13:22:30] andre__: If I had to venture a guess, that's a problem with the mail setup on the phabricator instances and not related to mchenry -- but it might be a problem in the *default* mail handling setup on instances. [13:23:33] andre__: My exim4 skillz are limited - I'm more of a postfix dude - but I could take a look at it if you open a bz for me (sorry, the labs workflow is still on the old tools) :-) [13:24:15] Coren: sure, I can do that. I'm already very happy and thankful to receive any kind of feedback, or hints if I should be looking for some mail-related settings or such [13:25:07] andre__: I should say the first thing to look at would be the mail logs on the instance the email originates from; in particular, you want to look for emails being deferred. [13:25:27] makes sense [13:26:06] It might be something as silly as the email being throttled because of bad reverse DNS or somesuch; the logs should report what happened. [13:26:33] Also, it'd probably make sense for labs to be an early phabricator adopter. :-) [13:29:58] Coren: well, if there is enough free physical memory, RSS is a good measure of how much ram is actually used by the process... the python processes have an RSS of about 11M max, which is close to what i'd expect [13:30:12] Wikimedia Labs / tools: Replication for enwiki has stopped - https://bugzilla.wikimedia.org/64154 (Sean Pringle) NEW→RES/FIX [13:30:29] i think i will just let the webservice run with 8G vmem limit for now [13:31:04] JohannesK_WMDE: Sure, but if it runs a while and you notice, say, that it sits comfortably around 5G, lowering that limit to 6G would be best. [13:34:04] Coren: ok. a somewhat nicer alternative would be to run the fcgi in separate threads, not processes. this would cause the ram to be shared... it would mean that it might run slightly slower because of python's GIL, but i'm waiting on the database most of the time anyway -- [13:34:34] i think i saw something about configuring thread-based fcgi stuff in flup.
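A sketch of the thread-based flup setup mentioned above: flup.server.fcgi is the threaded server (fcgi_fork is the per-process variant), so one interpreter's libraries are mapped once rather than per worker. The app body is a stand-in for the real limes application, and the maxThreads knob comes from flup's thread pool — worth verifying against the installed flup version:

```python
from flup.server.fcgi import WSGIServer  # threaded; fcgi_fork would fork workers

def app(environ, start_response):
    # placeholder for the real limes WSGI application
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from a thread\n"]

if __name__ == "__main__":
    # with the GIL, CPU-bound work won't parallelize across threads,
    # but DB-bound requests (as here) overlap fine
    WSGIServer(app, maxThreads=10).run()
```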
[13:35:05] JohannesK_WMDE: That's a reasonably nice optimization, but given that the actual footprint is relatively low I wouldn't worry overmuch about it. [13:37:01] ok [14:53:41] hey Cyberpower678 you there? [14:53:50] Kind of [14:55:21] do you know about https://cricket.orain.org/wiki/Main_Page? Cyberpower678 [14:57:34] Nothing much to see? Why? [14:58:34] See the infoboxes of this wiki's pages. [14:58:40] here's an example https://cricket.orain.org/wiki/Indian_Premier_League [14:58:50] Can you kindly help me by fixing it? [14:59:15] Cyberpower678: ^ [15:00:15] What am I supposed to fix, exactly? [15:00:31] The missing templates? [15:01:03] do you see any difference? I mean on en.wiki the infoboxes are placed nicely on the right, but there.. [15:01:16] Hmmm.... [15:01:48] Fix everything that needs to be fixed. Actually I'm not an expert at wiki coding. [15:03:23] Sorry, I can't help. I don't have Lua experience. [15:04:00] Pratyya, Mr. Stradivarius is a good person to ask. [15:05:23] where's he? I mean, where will I find him? [15:07:02] Oh wait. This was imported, so he's not registered on Orain. You can find him on the English Wikipedia. I'm also guessing that there's some CSS that needs to be done; I'm not all that experienced with it either. [15:54:43] Wikimedia Labs / deployment-prep (beta): mwdeploy user has shell /bin/bash in labs LDAP and /bin/false in production/Puppet - https://bugzilla.wikimedia.org/65591 (Bryan Davis) [15:55:26] Wikimedia Labs / Infrastructure: l10nupdate gid should be 10002 to match production/Puppet - https://bugzilla.wikimedia.org/65588 (Bryan Davis) [16:39:07] scfc_de: where's the bug about non-existing mail sender addresses? I can't find it [16:42:39] liangent: https://bugzilla.wikimedia.org/65629 [16:43:26] scfc_de: thanks [16:44:36] scfc_de: "The mails are in Chinese and contain spreadsheet attachments. User "user" seems to be Chinese, but my gut tells me that this is spam." this fits the spams sent to me well [16:47:31] liangent: And only one sentence (proverb?) in the message body and "see attachment"? That's all Google Translate yielded for me. [16:49:11] Wikimedia Labs: Mail notifications from fab.wmflabs.org delivered only days later (or not at all?) - https://bugzilla.wikimedia.org/65861 (Andre Klapper) p:Unprio→High [16:49:13] Wikimedia Labs: Mail notifications from fab.wmflabs.org delivered only days later (or not at all?) - https://bugzilla.wikimedia.org/65861 (Andre Klapper) NEW p:Unprio s:major a:None Copying from http://fab.wmflabs.org/T348 ; confirmed by other users Notification mail from http://fab.wmflabs.o... [16:49:24] scfc_de: most of the spams I received are longer than "one sentence"? [16:50:06] liangent: I mean the message body; I didn't check the attachments. [16:52:12] scfc_de: in one kind of mail (with spreadsheets) the body is longer too [16:52:41] liangent: But they are from non-existing senders? [16:53:11] Wikimedia Labs: Mail notifications from fab.wmflabs.org delivered only days later (or not at all?) - https://bugzilla.wikimedia.org/65861#c1 (Andre Klapper) and docs, for the records: https://secure.phabricator.com/book/phabricator/article/configuring_outbound_email/
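An aside on the earlier rlimit discussion: the vmem cap behaves like an address-space rlimit — the worst-case, nothing-shared number — which a process can inspect itself. A minimal read-only sketch, assuming the grid enforces the limit via RLIMIT_AS:

```python
import resource

# RLIMIT_AS is the address-space (vmem) limit; there is no enforced
# rlimit for RSS on Linux, which is why the grid caps vmem instead
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
print("vmem limit:", "unlimited" if soft == resource.RLIM_INFINITY
      else "%.1fG" % (soft / 2**30))
```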
[16:53:37] scfc_de: it seems they're using some random domains [16:53:55] so there is some chance that they got an existing one [16:54:27] and in some others (with an image), there's a short white proverb which is usually some greeting [16:54:41] Wikimedia Labs / deployment-prep (beta): Caching makes it impossible to test JS changes when logged out - https://bugzilla.wikimedia.org/63034#c4 (Greg Grossmeier) p:Unprio→Normal s:normal→major Roan: Could you give us a tip on how to effectively/easily invalidate cached js/css on Beta Clus... [16:56:56] !log deployment-prep Updated scap to fd7e538 [16:56:58] Logged the message, Master [16:57:06] scfc_de: whois on the sender IP returns netname: CHINANET-JS. this is almost the same for all spams [17:00:11] Wikimedia Labs / tools: Harden mail server against incoming spam - https://bugzilla.wikimedia.org/65629#c1 (Liangent) (In reply to Tim Landscheidt from comment #0) > The mails are in Chinese and contain spreadsheet attachments. User "user" > seems to be Chinese, but my gut tells me that this is spam.... [17:09:26] Wikimedia Labs / deployment-prep (beta): mwdeploy user has shell /bin/bash in labs LDAP and /bin/false in production/Puppet - https://bugzilla.wikimedia.org/65591 (Greg Grossmeier) p:Unprio→Normal [17:35:41] Wikimedia Labs: Mail notifications from fab.wmflabs.org delivered only days later (or not at all?) - https://bugzilla.wikimedia.org/65861#c2 (Andre Klapper) Another example from a few minutes ago: http://fab.wmflabs.org/T148#26 says "Via Web · Fri, May 23, 6:25 AM"; email received more than five days lat... [17:51:26] Wikimedia Labs: Mail notifications from fab.wmflabs.org delivered only days later (or not at all?) - https://bugzilla.wikimedia.org/65861#c3 (Marc A. Pelletier) One useful bit of data that would be important to have: what instance is the email originally generated on? (I.e., is it the public facing one,... [17:53:39] !log deployment-prep Restarted logstash on deployment-logstash1; last event logged at 2014-05-28T12:11:37 [17:53:41] Logged the message, Master [18:38:33] I get: The specified resource does not exist. https://wikitech.wikimedia.org/w/index.php?title=Special:NovaInstance&action=configure&project=dumps&instanceid=374dd29f-3241-4b89-b2f3-aefe367691e2&region=eqiad [18:39:00] I'm logged in; I'm fairly sure it does. [18:42:18] https://wikitech.wikimedia.org/wiki/Nova_Resource:I-0000013f.eqiad.wmflabs? [18:45:14] Nemo_bis: I'm just being told I'm not in the dumps project, myself. (Which I am not) [18:47:18] yes huh [18:56:02] mutante: here I am [18:56:16] hi all, can you help dmaggot maybe? [18:56:23] i could confirm he has an LDAP user [18:56:26] UID 2782 [18:56:34] but he does not have a home dir on bastion1 [18:56:40] and he says he had shell before [18:57:04] busy on something else but ..yea.. he is in LDAP ..so not sure why it doesn't create the /home [18:57:26] so it's about getting him to be able to ssh to bastion again [18:58:15] mutante, dMaggot, looking... [18:58:46] thanks! [18:59:26] dMaggot, please tell me what behavior you're seeing? [19:00:07] /home/dmaggot: ERROR: cannot open `/home/dmaggot' (No such file or directory) [19:00:12] homeDirectory: /home/dmaggot [19:00:21] Ah, so you can log in? [19:00:24] Just no homedir?
[19:00:46] andrewbogott: I can't log in, I get the usual banner then Permission denied (publickey). [19:00:52] oh.. the home dir gets created after login, right [19:00:57] so then it's just Failed publickey for dmaggot [19:01:38] yep, in the logs it looks like a plain old key mismatch [19:02:06] arr yea, i forgot for a second that it's normal to not have a home in this case [19:02:14] because it creates that after you actually login [19:02:19] and the keys aren't stored there [19:02:41] it would all make sense if I had not logged in before [19:03:15] that fact aside, I tried deleting my keys and reuploading them in wikitech, no joy [19:07:20] andrewbogott: "error: key_read: key_from_blob"? [19:07:40] Don't trust blob [19:07:55] if it is just a matter of time for keys to propagate, then I'll wait.... [19:08:01] error: key_from_blob: can't read key type [19:08:14] error: buffer_get_string_ret: cannot extract length [19:09:11] that sounds like an invalid key on one end or the other. Possibly garbage added during cut-and-paste? [19:10:58] I'll try uploading it again... [19:11:43] when you paste, try removing any newlines that get inserted [19:11:51] this upload looks better (my previous attempt messed up the newlines) [19:12:14] still unable to log in [19:12:40] where are you uploading? [19:12:41] ^d: around? Got some questions about the hhvm hourly build on labs/Jenkins. [19:12:53] also, just before deleting all of my keys and reuploading, I verified that the key on file was the same as my current key [19:13:02] just the hostname changed, which shouldn't be a problem [19:13:05] (I think) [19:13:12] ^d: it is built on an 8 cpu instance; I would like to free up some cpu and potentially build it on a 2 cpu (or even 1 cpu) node [19:13:20] the actual key file hasn't been updated for 20 minutes or so. So we may be waiting for something to update... [19:14:24] ^d: must be lunch time, mailing you :] [19:15:49] is this the right place to talk about projects? as in https://wikitech.wikimedia.org/wiki/Nova_Resource:Wlmjudging [19:16:21] dMaggot: ok, now it's updated. Better? [19:16:27] And, yes, this is probably the right place :) [19:16:45] andrewbogott: yes, I was able to log in, thanks [19:17:17] so, https://wikitech.wikimedia.org/wiki/Nova_Resource:Wlmjudging is the project Wiki Loves Monuments was using to host a number of installations of an evaluation tool [19:17:46] now another contest wants to use the tool and has asked me to do the installation and administration of the tool [19:18:12] I was thinking of using a similar setup, but separate from WLM to prevent a disaster [19:18:50] Could it just be a new instance in the same project? [19:20:34] I don't think so, is access at the level of projects? or instances? this group of people is in theory different from the group of people doing WLM, so permissions should probably be kept separate [19:21:02] (and I guess there's project access and then instance access, right?) [19:21:22] access is managed per project [19:21:32] so if it's a different set of people then a new project might be reasonable... [19:21:43] is that existing project defunct? Or will it be reused annually? [19:22:04] hi. what is the best way to store emails in a DB that are to be grouped separately (each group receiving a different email)... can i store emails separated by commas in the DB?
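Those key_from_blob / buffer_get_string_ret errors above are what sshd emits when the base64 blob of a public key is corrupt — typically stray newlines picked up in a paste, as suspected here. A hypothetical self-check before uploading (the OpenSSH wire format embeds a length-prefixed copy of the key type inside the blob):

```python
import base64
import struct

def check_openssh_pubkey(line):
    """Sanity-check one 'ssh-rsa AAAA... comment' public key line."""
    parts = line.split()
    if len(parts) < 2:
        return "too few fields -- was the key split across lines?"
    ktype, blob64 = parts[0], parts[1]
    try:
        blob = base64.b64decode(blob64)
    except Exception:
        return "base64 decode failed -- pasted garbage?"
    if len(blob) < 4:
        return "blob truncated"
    (n,) = struct.unpack(">I", blob[:4])      # length-prefixed type string
    embedded = blob[4:4 + n].decode("ascii", "replace")
    return "ok" if embedded == ktype else "type mismatch: %r" % embedded
```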
[19:23:19] andrewbogott: the situation of the current project is complicated: the international coordination has not set up the international contest, but several countries are holding their local contests [19:23:28] andrewbogott: most of them having their jury tool hosted in that project [19:23:38] Ah, ok. [19:23:38] andrewbogott: so not defunct [19:23:51] I can make you a new project then… what would you like me to call it? [19:23:53] rohit-dua: maybe use the mysql backend of dovecot? (IMAP server) [19:25:36] andrewbogott: the new contest is called Wikiviajes por Venezuela so a nice project name I guess would be Wikiviajesve [19:25:45] ok, hang on... [19:25:50] andrewbogott: here's the website http://viajes.wikimedia.org.ve/ so that you won't think I'm making this up :P [19:25:57] :) [19:26:36] dMaggot: ok, should be all set [19:28:05] andrewbogott: ok, I see https://wikitech.wikimedia.org/wiki/Nova_Resource:Wikiviajesve, but no instances and also, how can I add people to the project in the future? [19:28:21] dMaggot: you're a project admin, so you can create your own instances. [19:28:23] And also add people [19:28:30] andrewbogott: I'm not sure the tech team of the contest are all set up with Wikitech accounts etc, but I can figure that out later [19:28:48] andrewbogott: ok, let me try to set up something similar to what we had in WLM [19:28:49] mutante: i don't know what the mysql backend of dovecot is for. Will it help me group the emails? [19:28:52] You just want 'manage instances' and/or 'manage project' in the sidebar. [19:31:31] andrewbogott: I go to Manage Instances, set the filter to wikiviajesve and I see an empty list [19:31:40] andrewbogott: I would expect a button saying "Create Instance" [19:31:43] dMaggot: Yeah, there should be one. [19:31:49] But it's missing for me too, which means something is broken :( [19:32:26] Coren, is https://wikitech.wikimedia.org/wiki/Special:NovaInstance broken for you too? [19:36:24] dMaggot: try reload? [19:39:05] andrewbogott: ok, got the link [19:39:16] my mistake, should work now :) [19:39:24] Coren: nm [19:42:09] andrewbogott: huh... now I want a web server and PHP and MySQL (I'm sorry, I'm new at this - I didn't configure the other project :s) [19:42:19] andrewbogott: if there's documentation I should read just point me to that [19:43:22] The role::labs::lamp class will get you the basics. But really, if you want a duplicate of the existing WLM project, it's best to look and see what they did there, or contact whoever built those boxes [19:43:50] rohit-dua: yea, i think so, was just an idea though because you said "i want to organize mail in a db" and i thought "then why not use an IMAP server that already has a mysql backend".. maybe it's overkill though [19:44:10] andrewbogott: ok, makes sense [20:00:44] huh... I have a number of possibly unrelated questions: [20:01:14] my instance is dead, it says "failed" in Puppet Status, but the status is ACTIVE [20:02:01] I tried ssh'ing and pinging wikiviajesve-jurytool.eqiad.wmflabs and both just hang [20:02:43] then I tried pinging and ssh'ing to the instances of the WLM project and those DNSs apparently don't work (wlm-apache1.eqiad.wmflabs) [20:02:59] I remember I wasn't using eqiad before, it was like wlm-apache1.pmtpa.wmflabs or something like that [20:05:53] anyhow, I need to leave now, is anybody attending WikiConference USA that can provide support for these labs issues?
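Back to rohit-dua's question about comma-separated addresses: the usual answer is a normalized table, one row per (group, address), so groups can be queried and updated without string parsing. A sketch with made-up table and column names:

```python
import sqlite3

db = sqlite3.connect("bub.db")  # hypothetical; any SQL backend works the same
db.executescript("""
    CREATE TABLE IF NOT EXISTS mail_group (
        id   INTEGER PRIMARY KEY,
        name TEXT UNIQUE
    );
    CREATE TABLE IF NOT EXISTS recipient (
        group_id INTEGER REFERENCES mail_group(id),
        email    TEXT,
        UNIQUE (group_id, email)      -- no duplicate addresses per group
    );
""")

# all addresses for one group, ready to hand to the mailer:
emails = [row[0] for row in db.execute(
    "SELECT r.email FROM recipient r JOIN mail_group g ON r.group_id = g.id "
    "WHERE g.name = ?", ("uploaders",))]
```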
[20:33:42] Tool Labs tools / [other]: Migrate https://toolserver.org/~erwin85/xcontribs.php to Tool Labs - https://bugzilla.wikimedia.org/60881#c1 (PiRSquared17) NEW→RES/FIX I ported it, and Cyberpower678 moved it to erwin85's directory. [21:36:35] hello. what is the maximum number of cron jobs (entries?)? [21:39:04] !ping [21:40:51] rohit-dua: There's no limit. What are you trying to do? [21:42:48] scfc_de: the bot (bub-tool) needs to visit IA periodically to check if the book upload is ready. this is done for each new upload. so i'm starting a new cron job for each upload, which will be deleted only when the upload is ready. [21:43:45] you should probably use a redis queue or something for that instead, and have a persistent process that just listens for those [21:45:11] rohit-dua: Eh, cron jobs are typically run at some repeated fixed time. I don't quite understand how that relates to "upload" and how you start and delete cron jobs. [21:47:45] scfc_de: i need to check on IA if the proofreading for the upload is ready, since proofreading on IA takes time (minutes). so for each new upload, i need to visit its url and check if proofreading is ready.. [21:47:53] for this i thought of cron jobs.... [21:48:58] i didn't know of redis queues. what do they do? [21:49:20] rohit-dua: are you using python? [21:49:26] yes [21:49:52] So if someone does 100 uploads to IA, you add and delete 100 cron jobs? I would rather add them to a database and use one (continuous) job to see if they are ready. [21:50:19] rohit-dua: consider using something like http://www.celeryproject.org/. your life would be much simpler [21:51:20] YuviPanda: That's big iron for a GSoC project :-). I'd rather take a simple DB. [21:51:33] oh, didn't realize this was a GSoC project :) [21:51:37] yeah, a simple db works too :) [21:54:01] so i should rather use a continuous job? which will check for all the uploads periodically.. [21:54:25] yeah, just have one job that periodically checks the uploads not marked as done, and marks them done when they are [21:55:51] ok.. thank you. [22:23:26] Tool Labs tools / [other]: [grrrit-wm] literal \n's in IRC messages - https://bugzilla.wikimedia.org/57688#c1 (PiRSquared17) It's not still doing this, is it? [22:23:35] ^ legoktm [23:39:50] so I'm trying to add to mediawiki-vagrant and I'm finding that the puppet modules are quite different from the regular mediawiki puppet setup. are they maintained completely separately or is the vagrant repo just out of date somehow? [23:40:15] what's the regular mediawiki puppet setup? [23:40:22] in operations/puppet you mean? [23:40:27] er yeah [23:41:13] the vagrant box was built without regard to how ops does things? [23:41:17] operations/puppet was a tangled mess when mediawiki-vagrant started so they are different codebases [23:41:26] ok [23:41:27] "without regard" is putting it a bit strongly; they now share a number of modules [23:41:36] and there's the objective of converging them [23:41:50] but there's still some work to be done there [23:42:09] ok that's kinda what I needed to know. so I should add what's missing and try to make them more consistent in the process of what I am building? :) [23:42:24] yes, that's always very valuable (and appreciated) [23:42:32] cool [23:48:27] woo! :)
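Picking up the cron discussion from 21:54 above: a sketch of the single continuous job scfc_de and YuviPanda suggest, polling a table of pending uploads instead of adding one cron entry per upload. The schema and the readiness probe are assumptions, not the real bub code:

```python
import sqlite3
import time
import urllib.request

POLL_SECONDS = 300  # IA proofreading takes minutes, so poll gently

def is_ready(url):
    # stand-in for the real "is proofreading done?" probe against IA
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            return resp.getcode() == 200
    except OSError:
        return False

def main():
    db = sqlite3.connect("uploads.db")  # hypothetical schema: id, ia_url, done
    while True:
        for upload_id, url in db.execute(
                "SELECT id, ia_url FROM uploads WHERE done = 0").fetchall():
            if is_ready(url):
                db.execute("UPDATE uploads SET done = 1 WHERE id = ?",
                           (upload_id,))
        db.commit()
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    main()
```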