[03:43:09] Hello
[03:43:57] I have submitted a patch and didn't get any positive or negative reviews
[03:44:17] So I want to add a reviewer to it
[03:44:28] How can I add reviewers to it
[03:44:33] ?
[03:45:25] The screenshot of the access rights to my patch is resource://jid0-gxjllfbcoax0lcltedfrekqdqpi-at-jetpack/as-ff/data/edit.html
[03:47:37] alisha: hi! Do you have a link to your patch?
[03:49:43] Yes I do
[03:49:51] Here it is: https://gerrit.wikimedia.org/r/#/c/167018/
[03:52:33] ok, the best person to review that is probably Nikerabbit (who is already listed as a reviewer!), he's typically online in #mediawiki-i18n
[03:53:24] okay
[03:53:39] So do I need to add any other reviewer?
[03:54:48] probably not
[03:55:47] Ok, thank you
[04:40:31] legoktm: But I didn't get any positive or negative reviews
[04:40:51] What might be the reason behind that?
[04:41:14] It might take a while for it to get reviewed; the people best placed to review it might be busy with other things
[04:42:32] Okay
[10:31:53] Nemo_bis: hello
[10:34:14] I want to start with robust programming
[10:34:31] Please guide me: how should I start?
[13:51:02] How do I enable CustomNavBlocks only for certain skins?
[13:54:55] <^demon|away> Nemo_bis: That looks like a bug, ouch.
[13:55:14] <^demon|away> One I thought I'd fixed some time ago.
[15:28:10] What is the correct way to embed XML code into a MediaWiki page? Should I use ? Should I use :? Whatever?
[15:28:23] ... I mean an XML code sample
[15:29:43] There are a few ways
[15:29:50] You can even have it syntax-highlighted via an extension
[15:31:09] Reedy: Please give me a sample. (I can live without syntax highlighting; it is on Wikiversity, and I suppose the extension isn't installed there.)
[15:31:22] porton: It is enabled there
[15:31:40] Reedy: how do I use it?
[15:31:42]
[15:32:02] Reedy: Can I also highlight Turtle code?
[15:32:27] Doesn't look to be an option
[15:32:32] Unless it's known as something else
[15:34:14] Reedy: Which extension is used at Wikiversity?
[15:34:22] !geshi
[15:34:22] There are several extensions for syntax highlighting, see http://www.mediawiki.org/wiki/Category:Syntax_highlighting - the most popular one is at http://www.mediawiki.org/wiki/Extension:SyntaxHighlight_GeSHi
[15:34:29] The one at the end
[15:35:18] Reedy: I'm confused, I don't understand which of these extensions is used at Wikiversity
[15:35:39] http://www.mediawiki.org/wiki/Extension:SyntaxHighlight_GeSHi
[16:11:27] Hello everyone, I am trying to get the following into a 3-column layout with each column about 300px wide, for a total of 900px across. Can I get some help?
[16:11:32] https://dpaste.de/rcNn
[16:16:03] HalpMeh, https://dpaste.de/ydtf ?
[16:17:12] YES. Thank you. How would I go about making a 2nd row of 3 accounts?
[16:18:16] HalpMeh, https://dpaste.de/r9Qs
[16:20:17] Thank you very much, sir (pajz). This is exactly what I needed as an example to achieve what I wanted. I appreciate your help.
[16:20:46] You're welcome.
[16:33:47] pajz, I have another question. I have a 3x3 grid of accounts set up now, but every row after the first seems to be centering towards the middle. How do I get each "cell" to align to the top?
[16:38:31] HalpMeh, https://dpaste.de/AedN -- not quite sure if I understood the issue properly, though.
[16:47:14] I think this explains the issue a bit better: https://dpaste.de/RCU7 Your code fixed it up just fine, though. Thanks again.
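For the XML sample question above: with the SyntaxHighlight GeSHi extension that Reedy linked, the sample is simply wrapped in a tag on the wiki page. A minimal sketch, assuming the wiki exposes the <syntaxhighlight> tag (older installs also accept <source>); the XML inside is placeholder content, and as noted above Turtle is not among the supported lang values:

<syntaxhighlight lang="xml">
<catalog>
  <book id="bk101">
    <title>Sample title</title>
  </book>
</catalog>
</syntaxhighlight>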
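The dpaste snippets above are external links, so here is a minimal wikitext sketch of the kind of layout discussed: three columns of about 300px each (900px total), with vertical-align: top on each row so every cell stays pinned to the top instead of centering, which was the follow-up question. The "Account N" cells are placeholders:

{| style="width: 900px;"
|- style="vertical-align: top;"
| style="width: 300px;" | Account 1
| style="width: 300px;" | Account 2
| style="width: 300px;" | Account 3
|- style="vertical-align: top;"
| Account 4
| Account 5
| Account 6
|}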
[18:06:31] Hi all, I'm trying to figure out why our job queue doesn't seem to be running; turning on the wiki debug log for a few seconds shows two messages at a time: "[runJobs] Running 1 job(s) via '/index.php?...'" followed by "[runJobs] Failed to start cron API: received 'HTTP/1.1 411 Length Required"
[18:06:55] Not sure what that means or if it's even part of or indicative of the problem. Any thoughts?
[18:17:44] Hey all, summarizing my earlier question: the job queue isn't running, and I'm seeing messages in the wiki debug log: "[runJobs] Failed to start cron API: received 'HTTP/1.1 411 Length Required" -- any thoughts?
[18:18:20] FWIW, we recently upgraded to MW 1.23.2
[18:19:35] you're behind on security updates ;)
[18:19:37] jcl: I ran into similar problems (with a different HTTP response). I've just been running a cron job
[18:28:22] Hey all, I had a problem with my browser or something. If anyone responded further, please let me know.
[18:28:32] I was responding to the person who said they just run a cron job.
[18:31:08] Dumb question: how can I view the size of the job queue without just looking in the database's job table?
[18:31:27] I remember seeing a way but I don't remember where.
[18:37:34] reedy: Hi, you've reviewed this patch (https://gerrit.wikimedia.org/r/#/c/164726/) but I'm not sure what you meant when you said that message docs should be added for the new message.
[18:38:08] tuxilina: https://www.mediawiki.org/wiki/Localisation#Message_documentation
[18:39:16] Nemo_bis: Hello
[18:39:25] oh, thanks
[18:39:48] I want to know the file format support of JavaScript file format
[18:39:50] tuxilina: I'm slightly surprised siebrand didn't pick up on it :)
[18:44:25] Ok, I found the job queue size in the siteinfo statistics data via the API
[18:46:23] So with MW 1.23, MW makes HTTP connections to itself to run jobs at a frequency based on $wgJobRunRate, but it seems to be failing with an HTTP 411 code, meaning the server requires a Content-Length header but MW isn't sending one to itself?
[18:48:13] jcl: It seems it might not just be you with the issue, so it might be worth filing a bug (as well)
[18:49:17] Reedy: Ah, ok, thanks. I'll do that. In the meantime, I guess setting up a cron job, as suggested earlier and on the MW job queue doc page, may be a reasonable workaround for now.
[18:50:55] PhpStorm is playing up
[18:50:57] * Reedy grumbles
[18:51:38] interesting, it's not the first time I've seen the HTTP connection for running jobs fail, but until today nobody came with a useful error message like thos
[18:51:40] *this
[18:52:20] Nemo_bis: ping
[18:53:49] jcl: fyi, there's also maintenance/showJobs.php
[18:55:28] jcl: what's your HTTP server, by the way? nginx, Apache...?
[18:55:46] Bug submitted at https://bugzilla.wikimedia.org/show_bug.cgi?id=72274
[18:55:57] Reedy: Good to know, thanks. I'll take a look.
[18:56:15] Vulpix: Apache 2.2.22 on Ubuntu
[18:56:27] jcl: Might be worth adding that to the bug, along with your PHP version
[18:56:37] Reedy: Will do, thank you.
[18:57:27] I'm guessing it's something to do with it being a "special page", but then just printing stuff
[18:58:04] I tested that job queue thing on my server and it works fine
[18:58:41] Added version info to the bug.
[18:59:01] Apache 2.2.22 on openSUSE... strange
[18:59:41] It's certainly possible it's something about my server. The upgrades were significant and I did do a full dist-upgrade following the MW upgrades, but other than the job queue issue, everything seems to be working fine.
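For the job queue size lookup mentioned above (the siteinfo statistics exposed by the API), a rough Python sketch; the API_URL value is a placeholder for your own wiki's api.php:

import json
from urllib.request import urlopen

API_URL = "http://localhost/w/api.php"  # placeholder: point this at your wiki's api.php

url = API_URL + "?action=query&meta=siteinfo&siprop=statistics&format=json"
stats = json.loads(urlopen(url).read().decode("utf-8"))["query"]["statistics"]
print("Jobs queued:", stats["jobs"])  # the "jobs" field is the job queue length

The same number is also reported by maintenance/showJobs.php, which comes up a little further down.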
[19:00:00] jcl: noting that 1.23.5 is out, so you might want to upgrade for security stuff
[19:00:06] it probably isn't going to fix your issue...
[19:00:42] maybe Ubuntu compiles Apache with a special flag that adds that check for the Content-Length header
[19:01:20] I wonder if there's an easy way to test this
[19:20:39] Reedy: Is there any way to have MW log the Special:RunJobs HTTP requests? They don't seem to show in the Apache access log.
[19:20:51] And further, to dump the HTTP headers?
[19:22:23] the debug log usually logs those, but I guess the error is intercepting the call, so PHP (and MediaWiki) isn't even called here
[19:24:10] I don't know if this matters, but a few lines before the runJobs messages in the debug log, I also see "[warning] Did not find alias for special page 'ListTransclusions'. Perhaps no aliases are defined for it?'
[19:24:35] Could that be related?
[19:25:02] Shouldn't be
[19:26:54] The error is from fsockopen
[19:26:55] if ( !$sock ) {
[19:26:55] wfDebugLog( 'runJobs', "Failed to start cron API (socket error $errno): $errstr\n" );
[19:27:43] I guess looking at the headers of the requests at issue might help
[19:41:10] Reedy: I don't know if my previous message showed, I've been having trouble today with this IRC site, but could this be at all related to possibly (not sure) having had jobs in the queue during the upgrade from MW 1.20 to 1.23? I'm just looking at bug 46934 and seeing some discussion along those lines with MW 1.22.
[19:41:40] I did run several runJobs --maxjobs=1000 commands after the upgrade to clear it out; not sure if it got everything, though.
[19:42:10] It's an HTTP error
[19:42:30] print FormatJson::encode( $response, true );
[19:42:40] Yeah, I was grasping. Plus I've just checked the job queue and the oldest job is just after the upgrade, anyway.
[19:42:44] The code doesn't seem to do anything towards writing a Content-Length header
[19:43:39] Wouldn't everyone have the problem then, assuming they're not using a web server that is always injecting such a header?
[19:44:12] my Apache doesn't complain about the missing Content-Length
[19:50:08] How can I send a runJobs HTTP request from the command line? It's required to be a POST and I'm not sure what data, if any, would need to be sent to the type of URLs shown in the wiki debug log.
[19:50:31] I meant doing this through curl.
[19:53:30] I just saw your post on the bug page. So the issue is between MW and the web server it runs on, and whether it follows the HTTP protocol in requiring Content-Length
[19:55:37] yes
[19:56:48] Vulpix: Ok, thank you and everyone for the help. Hopefully the discussion in the bug will yield a fix, unless there's something I can tweak in Apache, but for now I'll just set up a cron job.
[19:57:13] although the RFC says Content-Length SHOULD be present, not MUST...
[19:58:39] Vulpix: So adding an additional Content-Length header there should fix it? Then I'll give it a shot.
[19:59:13] yes, it should fix it
[19:59:19] thanks :)
[20:00:46] Cool, thanks Poke! :)
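Following up on "I wonder if there's an easy way to test this": one way to check whether a particular Apache build really answers 411 to a POST carrying no Content-Length header (which, per the discussion above, is roughly what the fsockopen-based job runner sends) is a raw socket request. A rough Python sketch; HOST, PORT and PATH are placeholders for your own server:

import socket

HOST = "localhost"   # placeholder: the web server to test
PORT = 80
PATH = "/index.php"  # any URL on that server will do for this check

request = (
    "POST " + PATH + " HTTP/1.1\r\n"
    "Host: " + HOST + "\r\n"
    "Connection: close\r\n"
    "\r\n"           # deliberately no Content-Length header and no body
)

sock = socket.create_connection((HOST, PORT), timeout=10)
sock.sendall(request.encode("ascii"))
status_line = sock.recv(4096).decode("ascii", "replace").splitlines()[0]
sock.close()
print(status_line)   # e.g. "HTTP/1.1 411 Length Required" on an affected server

If this returns 411 on the Ubuntu Apache but not elsewhere, that would support the packaging/configuration theory floated above.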
[22:06:42] Hi, I'm trying to use Special:Export, and am being asked to provide "Add pages from category:" -- is there a catch-all for all pages in the main namespace?
[22:07:12] or for pages without a category? Unfortunately this wiki is not very organized :/
[22:07:56] I don't think there is...
[22:08:03] Have you got shell on the wiki?
[22:09:13] Sorry, is shell like admin access? I'm just a reviewer/contributor and trying to archive for my own archives, not to mirror/import the site elsewhere
[22:09:32] shell means access to the server
[22:10:47] No, I don't think so
[22:11:50] or would using third-party software like wget be preferable for this offline archive?
[22:12:43] You could use pywikipediabot to export all pages
[22:13:00] *pywikibot is what it's called now..
[22:13:28] } elseif ( $request->getCheck( 'exportall' ) && $config->get( 'ExportAllowAll' ) ) {
[22:14:37] poke: hmm, it looks like pywikibot requires bot access? Or can any user run it?
[22:14:49] Any user can use it
[22:15:10] You just need a bot account if you want to use write access without being throttled
[22:17:55] I'm taking a look at pywikibot; it looks somewhat complicated, and it requires setting it up with Python. Is there something with a Windows GUI? heh
[22:18:21] I tried HTTrack, but the wiki's robots.txt disallows it
[22:18:44] You just need to use the API to build a list of articles
[22:18:49] you can then shove that in the text box
[22:19:01] But depending on how big the wiki is, they might not all work in one go
[22:19:43] Could you give me a link to the wiki?
[22:19:51] or do you want to keep it a secret?
[22:20:07] yeah, sure, it's public: http://wiki.teamliquid.net/starcraft2
[22:20:44] something like 9000 pages, but also images, etc., so I don't know if those can be exported too
[22:21:13] Nope
[22:21:25] You can export the file pages, but not the actual files, using the exporter
[22:21:37] pr3ch: Is it over 9000?
[22:22:50] Reedy: it looks like 9000 pages, then 9000 images... there are 40,000 pages in total of redirects/talk/etc. that I don't need
[22:24:20] I gave it a shot using HTTrack, which is a GUI-based mirror/scraping freeware app, but it didn't work well and after a few attempts it blocked my IP...
[22:25:06] Why are you trying to fork it? ;)
[22:26:05] just trying to get some reading material for riding the bus :) plus I wrote some pages and want to save a copy for memories
[22:28:38] Hello. I have a small question. Is it possible to know how many lines there are in a table on a wiki page? I googled it, but could not find anything.
[22:37:37] pr3ch: Still here?
[22:38:05] poke`: yep
[22:38:33] I see there's this https://github.com/WikiTeam/wikiteam software, maybe that'd work for my purposes
[22:39:48] Can anybody help? Is it possible to know how many lines there are in a table on a wiki page?
[22:40:55] agent0: you'd need to devise some way of counting them
[22:41:13] Good job me, crashing my IRC client..
[22:41:26] pr3ch: Here is a list of all main namespace article names on your wiki: http://pastebin.com/HTJFruun
[22:41:55] lol
[22:42:33] I tried pasting the link in IRC and I accidentally still had that file content in my clipboard. Bad idea xD
[22:43:32] poke`: great, thanks, could I trouble you for its sister site too? http://wiki.teamliquid.net/starcraft
[22:43:48] Reedy: And there isn't any way of counting them automatically?
[22:43:51] pr3ch: AutoWikiBrowser might help you here
[22:43:58] agent0: No
[22:45:32] Reedy: ok, thanks.
[22:45:36] pr3ch: There you go: http://wiki.teamliquid.net/starcraft
[22:45:45] Wrong clipboard again, wtf.
[22:46:04] Reedy: thanks, I'm reading its WP page now
[22:46:14] pr3ch: Here.. http://pastebin.com/rvfZcjxj
[22:46:23] it's a Windows GUI and has list building ;)
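For the "use the API to build a list of articles" suggestion above, a rough Python sketch that walks list=allpages with continuation and prints a plain title list suitable for pasting into the Special:Export text box. The api.php path is an assumption; adjust it for the wiki in question:

import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "http://wiki.teamliquid.net/api.php"  # assumed endpoint; adjust for your wiki

params = {
    "action": "query",
    "list": "allpages",
    "apnamespace": 0,                # main namespace only
    "apfilterredir": "nonredirects", # skip redirects
    "aplimit": 500,
    "format": "json",
}
titles = []
while True:
    data = json.loads(urlopen(API + "?" + urlencode(params)).read().decode("utf-8"))
    titles.extend(p["title"] for p in data["query"]["allpages"])
    # older releases signal continuation via "query-continue", newer ones via "continue"
    cont = data.get("continue") or data.get("query-continue", {}).get("allpages")
    if not cont:
        break
    params.update(cont)

print("\n".join(titles))

As noted above, a list of roughly 9000 titles may need to be pasted in chunks, since the exporter might not handle them all in one go.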
[22:48:13] Reedy: great, it doesn't look like it requires a bot account?
[22:48:17] poke`: much appreciated
[22:48:45] I mean, every time I add a new line to the table, I need to edit a string under the table saying "totally: pages of category somecat are maintained"
[22:49:03] It doesn't. A bot account is only needed when you want non-throttled access and to hide your contributions.
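On the table-row question: there is indeed no built-in wikitext row counter, but if the number being maintained is really the count of pages in a category, as the last message suggests, MediaWiki can generate that number itself with the PAGESINCATEGORY magic word. A sketch, where "Somecat" is just the placeholder name from the question and the optional "pages" argument restricts the count to ordinary pages:

Totally: {{PAGESINCATEGORY:Somecat|pages}} pages of [[:Category:Somecat]] are maintained.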