[00:20:53] i'm using Extension:MultimediaViewer, but my images are coming through with "No description available", and it defaults to showing "View license" (when there's no actual license on the imagery). Where does the description come from?
[00:21:09] (I would have assumed it came from the File: page itself, but i suspect it's... image caption?)
[00:30:55] Morbus, I'd suggest asking in #wikimedia-multimedia (but it's 5:30pm, so people will be on their way out now/soon)
[00:31:30] ((Pacific time, for the ppl at the office. I think some of the team is in the European timezone))
[00:31:39] no worries, thanks.
[00:31:44] figured out how to change the "View license" text.
[00:31:50] reading docs and looking at code for everything else :)
[00:36:48] tgr is a multimedia person and he's still here :)
[00:38:26] Morbus: https://www.mediawiki.org/wiki/Multimedia/Media_Viewer/Template_compatibility might be helpful
[00:39:02] * Morbus reads.
[00:40:09] interesting.
[00:41:03] Morbus: this relies on the CommonsMetadata extension, in case you are running your own wiki
[00:41:09] aye, i am.
[00:41:24] without Commons images, and without CommonsMetadata, thus why i'm not seeing anything :)
[00:45:49] there used to be a place in earlier MWs where i could see how many jobs were in the queue - where'd that go in 1.23?
[00:45:57] it's now in the API
[00:46:13] ah, i think it was in Special:Version or Special:Statistics before?
[00:46:19] so, no page-viewable version of it now?
[00:46:22] https://en.wikipedia.org/w/api.php?action=query&meta=siteinfo&siprop=statistics
[00:46:29] ah!
[00:46:30] no, it was removed since the count is extremely unreliable
[00:47:17] yeah, i was more interested in whether there were, or were not, jobs.
[00:47:19] not the exact count.
[00:47:25] and i can see that in the api.
[00:55:06] tgr: i have many super large images on my side, that i'm using with MultimediaViewer.
[00:55:25] i'm seeing many of them time out and MV saying "Could not load thumbnail data, could not load image from..."
[00:55:35] (you can see some of these by paging through http://www.disobey.com/wiki/Category:Posters_and_covers_for_1990
[00:56:17] Morbus: probably imagemagick timing out / OOM-ing
[00:56:44] although that would not affect the data, so maybe not
[00:57:00] tgr: if i manually view the File: page and choose Expand from there, the thumb will generate.
[00:57:12] seems to work for me though
[00:57:14] and then if i reload the Category: page and page through, the thumbnail is fine (having already been generated).
[00:57:59] tgr: maybe try http://www.disobey.com/wiki/Category:Posters_and_covers_for_1984, which i haven't loaded any from.
[00:58:07] so no chance of us stepping on thumbnail generating toes.
[01:01:07] Morbus: I think this is a matter of using thumb.php for your thumbnail 404 handler
[01:01:21] hrm. i read something about that.
[01:01:27] where was it, where was it.
[01:01:27] Damned if I remember how to do it though. Oh! https://www.mediawiki.org/wiki/Manual:Thumb.php#404_Handler
[01:02:10] there's mention of it on https://www.mediawiki.org/wiki/Extension:MultimediaViewer too.
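(Editor's sketch, not something run in the channel: a quick way to check whether the wiki can produce a thumbnail at all, independently of MediaViewer, is to ask the API for one via prop=imageinfo with iiurlwidth. The api.php path is an assumption; the file name is taken from the request pasted further down in the log.)

```python
# Editor's sketch (not from the channel): ask the MediaWiki API for a
# 640px-wide thumbnail URL. If this succeeds while MediaViewer's guessed URL
# does not, thumbnail generation itself is fine and the problem is in how the
# guessed URL is answered by the webserver.
import requests

API_URL = "http://www.disobey.com/w/api.php"  # assumed location of api.php
FILE_TITLE = "File:A_Nightmare_on_Elm_Street-1984-German-Poster-1.jpg"  # from the log

params = {
    "action": "query",
    "titles": FILE_TITLE,
    "prop": "imageinfo",
    "iiprop": "url",
    "iiurlwidth": "640",   # asks the server for (and triggers) a 640px thumbnail
    "format": "json",
}
data = requests.get(API_URL, params=params).json()
page = next(iter(data["query"]["pages"].values()))
print(page["imageinfo"][0]["thumburl"])
```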
[01:02:45] Request URL:http://www.disobey.com/w/files/thumb/5/5e/A_Nightmare_on_Elm_Street-1984-German-Poster-1.jpg/640px-A_Nightmare_on_Elm_Street-1984-German-Poster-1.jpg
[01:02:48] Request Method:GET
[01:02:50] Status Code:301 Moved Permanently
[01:03:01] Location:http://www.disobey.com/wiki/Main_Page
[01:03:31] looks like some webserver configuration weirdness
[01:03:32] *nod*
[01:04:14] That's a 404 handler gone wrong, probably. tgr I see this when I set up a new image on my local wiki, too
[01:04:46] (also, all, hi from I-90 somewhere in New York, the future is now)
[01:04:52] I think what happens is
[01:04:57] yeah, in my htaccess, if a URL doesn't exist on the filesystem or as a directory, i redirect to the main page.
[01:05:15] MediaViewer uses thumbnail guessing, fetches the URI before the thumbnail would exist
[01:05:38] server returns 301, which is technically not an error, so we do not fall back to the API
[01:05:49] cos i also have a Drupal install on this site, which also does the "send any URL to a main index.php handler".
[01:06:01] and obviously displaying the main page HTML as a JPEG does not work so well
[01:06:14] heh, yeah.
[01:06:19] * Morbus heads to the .htaccess to fiddle.
[01:07:06] set $wgMediaViewerUseThumbnailGuessing = false; and you should be ok
[01:07:16] I should really change that to be the default
[01:07:17] according to the Extension: page, that still might not work...
[01:07:29] or is that no longer the case? had a June timestamp on it.
[01:07:40] "This can still fail as of June 2014; a handler is recommended."
[01:09:32] * Morbus sets it and fiddles.
[01:11:05] not sure if that ever was the case, looks like a misunderstanding to me
[01:11:13] https://bugzilla.wikimedia.org/show_bug.cgi?id=64554 has more details
[01:11:20] yeah, i've got it false'd now and haven't been able to duplicate. seems to be working pretty well.
[01:14:55] tgr: thanks. commented.
[01:25:07] tgr: hrm...
[01:25:22] looking at the site on an iphone, it seems there's a "License information" string that i'm not seeing in Special:AllMessages
[01:26:05] i'm using MobileFrontend too - not sure if that relates.
[01:26:51] might be something parsed from a file description page
[01:27:16] never tried to use it together with MobileFrontend though
[01:27:43] yeah, it looks like it's MobileFrontend.
[01:27:47] MobileFrontend/i18n/en.json: "mobile-frontend-media-license-link": "License information",
[01:27:50] sorry for the noise.
[01:28:43] in which case, i'm now unclear if the images are MF- or MV-served... i never tested an image click on mobile before installing MV :D
[01:28:55] ah well. not a biggie
[01:30:31] if you are in mobile mode, it should be the mobile viewer, which is completely unrelated code
[01:30:53] if you use the desktop view from a mobile device, you should see MediaViewer
[01:37:29] tgr: yep, that's exactly what happens.
[01:37:32] thanks.
[02:05:50] Hello! Is there any way to prevent a specific user group from viewing a single special page?
[02:07:04] Or, even better, is there a way to check if a user is NOT in a specific group, so I can restrict access through the programming of the page itself?
[02:08:09] https://dpaste.de/5hAt < I've been using something like that, though I'm not sure what I've done wrong precisely.
[02:16:42] gr, they left.
[07:38:00] hello
[07:38:17] how can i remove index.php from the links in mediawiki software?
[08:14:11] is it possible to install VisualEditor on MediaWiki 1.22.5?
[08:29:55] oh, found the link
[11:15:17] ws2k3: for what purpose?
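(Editor's sketch of the failure tgr describes above, not something run in the channel: requesting the guessed thumbnail URL without following redirects shows the catch-all rewrite answering with a 301 instead of a 404, which is why MediaViewer never falls back to the API. The URL is the one pasted at 01:02:45.)

```python
# Editor's diagnostic sketch (not from the channel): fetch the guessed
# thumbnail URL without following redirects and see how the webserver answers.
import requests

guessed_thumb = (
    "http://www.disobey.com/w/files/thumb/5/5e/"
    "A_Nightmare_on_Elm_Street-1984-German-Poster-1.jpg/"
    "640px-A_Nightmare_on_Elm_Street-1984-German-Poster-1.jpg"
)

resp = requests.get(guessed_thumb, allow_redirects=False)
content_type = resp.headers.get("Content-Type", "")

if 300 <= resp.status_code < 400:
    # The broken case: a catch-all rewrite sends the missing thumbnail to the
    # main page. A redirect is not an error status, so MediaViewer treats it
    # as a thumbnail and never falls back to the API.
    print("redirected to", resp.headers.get("Location"))
elif resp.status_code == 404:
    print("plain 404 - MediaViewer would fall back to the API")
elif content_type.startswith("image/"):
    print("thumbnail exists and is served as an image")
else:
    print("unexpected response:", resp.status_code, content_type)
```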
[14:04:12] Hi. I'm new to open source development and thought I'd give mediawiki a try. Am I in the right place?
[14:04:55] insaynasasin: yes, and also #wikimedia-dev
[14:05:38] Vulpix: can you help me with this? how do i start
[14:07:57] !start | insaynasasin
[14:07:57] insaynasasin: https://www.mediawiki.org/wiki/How_to_become_a_MediaWiki_hacker
[14:12:54] I am following this tutorial for setting up the gerrit account - http://www.mediawiki.org/wiki/Gerrit/Tutorial
[14:13:16] but i am having problems in setting up git review
[14:14:14] what problems?
[14:16:17] when i type the command git review -s after cloning the repository it gives the following error - Problems encountered installing commit-msg hook The following command failed with exit code 1 "scp gerrit.wikimedia.org:hooks/commit-msg .git/hooks/commit-msg" ----------------------- .git/hooks/commit-msg: No such file or directory
[14:40:13] hi. i am new to open source. please help me and tell me how i can contribute to mediawiki
[14:41:39] siddharth: your problem with git review seems to be the port number it's connecting to gerrit.wikimedia.org on
[14:41:56] how did you clone the repository?
[14:43:01] Vulpix: i cloned the examples repository just as given in the tutorial - git clone https://gerrit.wikimedia.org/r/p/test/mediawiki/extensions/examples.git
[14:43:08] hello all. would anyone care to share some documents about how to maintain large mediawiki deployments?
[14:43:33] I need to automate as much as possible and make it easy to upgrade to new stable versions of mediawiki core and extensions.
[14:43:55] my research thus far hasn't gotten me where I need to go.
[14:58:55] i got to know about open source and that starting from mediawiki could be a good option. i am good at c and c++ and also know html, css, javascript and jquery. so what is it that i can do here?
[14:59:06] Hi there siddharth!
[14:59:16] Yeah, there's plenty to do with JavaScript and CSS around here
[14:59:34] Let me put together a few options for you :)
[15:01:17] marktraceur: i am all ears
[15:01:25] siddharth: https://bugzilla.wikimedia.org/show_bug.cgi?id=44803 might work for you
[15:01:34] The code is pretty much ready, just needs to get submitted to the right spot
[15:02:05] oh, I reported that bug :P
[15:02:10] ohk let me give it a try
[15:02:29] :)
[15:02:48] siddharth: If you need help finding a spot to put the code, we can probably guide you a bit, but it's probably helpful to try yourself first of course
[15:03:28] marktraceur: i am on it
[15:03:49] siddharth: Also I can give you editbugs so you can claim the bug, edit it, etc.
[15:04:02] All I need is an email address that matches your bugzilla account
[15:04:08] (and if you don't have one, a bugzilla account)
[15:04:23] siddharth93singh@gmail.com
[15:04:44] sorry... sidharth93singh@gmail.com
[15:06:20] 's okay :)
[15:06:55] siddharth: Done! You can now edit most basic bug fields.
[15:08:12] how do i do it? the bug you just gave me? how do i view this function scrollEditBox? there must be some way to see that
[15:10:10] let me explain that bug ;)
[15:10:32] ohk
[15:12:10] siddharth: edit a page, put a bunch of text on it so you have a vertical scroll bar in the textarea. Move the scrollbar so the top lines of the text are outside of the view and place the cursor on a line of the visible ones. Hit preview. When the edit form is rendered again, the scrollbar of the textarea should be preserved at the same position it was before
[15:12:39] use the normal code editor, not the visual editor
[15:13:05] oh, wait
[15:13:16] looks like it has been fixed!!
[15:13:25] Oh!
[15:13:26] Well
[15:13:32] at least it works now on mediawiki.org
[15:13:37] Then maybe siddharth's first official act as editbugs can be to close the bug :)
[15:14:16] heh
[15:14:48] so how do i do it then?
[15:15:06] siddharth: Confirm it's working for you locally, then say so on the bug, and mark the bug as RESOLVED/FIXED
[15:15:22] and please add the gerrit change ID
[15:15:33] it was fixed with
[15:16:12] marktraceur: i have to do that in the comments below. am i right?
[15:16:36] Yeah, siddharth
[15:16:37] Status: UNCONFIRMED NEW ASSIGNED PATCH_TO_REVIEW RESOLVED
[15:16:47] rillke: Not sure if it's worthwhile to find that...
[15:17:17] not sure either but if it's RESOLVED/FIXED it should have
[15:17:31] Sigh, /me looks
[15:18:09] maybe 97c8120f995cb333fb86c82b7b9adbe0994faddb
[15:18:33] marktraceur which git command did you run to find out? Or did you use blame or what?
[15:18:44] I'm just searching the logs for "scroll"
[15:18:52] The caveman version of git bisect
[15:18:59] marktraceur: done! :-D
[15:19:14] Cool
[15:19:27] give me something more now please. something which will remain open for a while! :-P
[15:19:38] siddharth: Oh, now I'm behind again! Curse your efficiency sir :)
[15:20:10] is siddharth interested in JS?
[15:20:21] Mostly yeah
[15:20:34] I just filed a bug that should be simple to resolve
[15:20:35] I'm not sure if I want to throw them at an extension
[15:20:37] Oh cool beans
[15:21:12] what about that: https://bugzilla.wikimedia.org/show_bug.cgi?id=67198
[15:22:46] lemme try again
[15:25:32] new version released
[15:25:45] \o/
[15:25:50] I mean, that was a bit ago
[15:26:22] I've sometimes updated the topic days after the release :D
[15:30:09] We still have thousands of wikis on 1.15, what unit is "days"
[17:16:03] I want to delete local files on a wiki (1.22.6, php 5.4.4) that already exist on InstantCommons (it's been enabled for a few hours, but before that we uploaded files from commons to the wiki to use them)
[17:16:33] Is there an extension, script or api function to generate a list of those files?
[17:38:43] SPF|Cloud: I can get you a list of image names from commons and you can cross-reference that with a list from your wiki
[17:39:25] Betacommand: that would be a huge file :P
[17:40:12] Vulpix: not that bug
[17:40:16] *big
[17:40:16] okay, and how to check it against a list from my wiki?
[17:40:29] SPF|Cloud: do you do any programming?
[17:40:37] no
[17:40:51] SPF|Cloud: one sec then
[17:42:55] SPF|Cloud: can you run and save "select img_name from image;" to a file for me?
[17:43:14] no I haven't got database access
[17:43:26] and also no ftp
[17:43:47] maybe you can access local files from the api
[17:44:03] SPF|Cloud: is the wiki public?
[17:44:13] yes it is
[17:44:33] and api is enabled so I can query things from the api if needed
[17:44:48] SPF|Cloud: what's the URL?
[17:45:48] wikikids.nl/api.php
[17:57:21] SPF|Cloud: working on your list now
[17:57:28] okay
[17:57:29] thanks
[17:59:18] SPF|Cloud: it will take a few minutes to process a 1GB text file
[17:59:40] I understand it
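(Editor's sketch, not from the channel, following up on "maybe you can access local files from the api": list=allimages can dump every local file name without database or FTP access, as a stand-in for "select img_name from image;". The endpoint is the one SPF|Cloud gave; the old-style query-continue continuation is assumed, as MediaWiki 1.22 uses by default.)

```python
# Editor's sketch (not from the channel): list the names of all locally
# uploaded files via the API when there is no database or FTP access.
import requests

API_URL = "http://wikikids.nl/api.php"

params = {
    "action": "query",
    "list": "allimages",
    "ailimit": "500",
    "format": "json",
}
while True:
    data = requests.get(API_URL, params=params).json()
    for image in data["query"]["allimages"]:
        print(image["name"])                      # e.g. "Some_picture.jpg"
    cont = data.get("query-continue", {}).get("allimages")
    if not cont:
        break
    params.update(cont)                           # carries "aicontinue" forward
```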
[18:01:29] Is there a way to pass {{PAGENAME}} as a parameter to a custom tag extension; i.e.
[18:01:43] err...
[18:10:11] retentiveboy: yes
[18:10:37] retentiveboy: use {{#tag:foo|(tag content goes here)|bar={{PAGENAME}} }}
[18:11:02] retentiveboy: or, rewrite the extension to parse its parameters as wikitext :)
[18:11:35] So {{#tag}} is a built-in parser function that calls the tag? Slick!
[18:12:56] yeah
[18:12:58] Betacommand: and?
[18:12:59] !parserfunctions
[18:12:59] "Parser functions" are a way to extend the wiki syntax. ParserFunctions is an extension that provides the basic set of parser functions (you have to install it separately!). For help using parser functions, please see . For details about the extension, see .
[18:13:30] MatmaRex: thx
[18:13:36] retentiveboy: at the very bottom of https://www.mediawiki.org/wiki/Help:Magic_words :)
[18:23:10] SPF|Cloud: had to restart something, I'll let you know when I have a list
[18:23:20] okay
[18:43:42] SPF|Cloud: do the files from commons have the same name on the local wiki?
[18:44:14] some files uploaded on the local wiki have the same name on commons
[18:44:17] but not all
[18:44:25] dammit
[18:45:09] SPF|Cloud: I can get you a list of those, but the rest would require a lot more work
[18:45:50] Betacommand: it's okay if you only have a list of those with the same name on commons
[18:46:26] SPF|Cloud: that's what I'm in the process of compiling, however it takes quite a bit of CPU
[18:46:53] hmm. if I can help with it that is okay
[18:47:37] SPF|Cloud: not really, it just takes time to parse and process a 1.02GB text file
[18:49:11] hopefully, it wasn't "so big"
[18:50:04] Vulpix: it's only ~250MB compressed
[18:50:42] I guess a MediaWiki script for that isn't feasible, since it doesn't have a local list of commons files
[18:51:34] Vulpix: I've got a python script processing it right now
[18:51:55] It doesn't? How does MediaWiki check for e.g. reupload-shared then
[18:52:26] Nemo_bis: That'd be a lot of files to keep a local list of for commons
[18:52:29] That would be instantly out of date
[18:52:30] ;)
[18:53:06] Nemo_bis: at upload it
[18:53:33] that's a handful of requests compared to a 1GB list
[18:56:27] Ok, let me rephrase: is there really no way to ask the API which files are dupes of the remote repo?
[18:58:23] Nemo_bis: you can do it on a file-by-file basis using SHA1
[19:00:11] Sure
[19:00:27] You can also download the 30 TB I archived on archive.org and sha1sum them all for that matter
[19:02:10] Nemo_bis: or just ask someone to run a database query for the SHA1s on both wikis and compare them
[19:05:48] I guess the practical query here is to compare titles and not sha1. Get a bunch of titles (of images) on the local wiki and call the commons api for title matches, in batches of 50. Then compare the sha1 of both only if they exist
[19:07:03] oh, 11540 files on that wiki... well, that could take a while
[19:10:39] Vulpix: I cheated, I downloaded the page_title dump from commons and am using that
[19:12:38] I'd rather upload both title lists into a mysql database and run a simple join :P
[19:17:08] Vulpix: if you know python it's probably faster to just use it
[19:17:38] there shouldn't be anything faster than a database join
[19:17:38] local = get_lines(pathtolistoflocalfiles)
[19:18:00] Vulpix: upload/download/import/processing time
[19:18:30] for line in commonsfile
[19:18:41] if line in local
[19:18:47] we have a match
[19:19:46] are you loading 1GB of text into memory? scary
[19:20:29] well, that's not a problem if you have plenty of them, of course
[19:20:40] Vulpix: No
[19:20:59] the for line in commonsfile parses it one line at a time
[19:22:17] that's better, but still will take a lot of time
[19:24:15] if you load this into a database, with an index on title, they'll be sorted once, and both streams will be read sequentially once to match
[19:25:37] Vulpix: python does that fairly well itself
[19:39:08] Betacommand: is it 1GB with or without compression?
[20:15:30] SPF|Cloud: without
[20:16:08] okay
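(Editor's sketch of the approach Vulpix outlines at 19:05, not code from the channel: take the local file names and SHA1s — obtainable with list=allimages plus aiprop=sha1, much like the earlier sketch — ask the Commons API about the same titles in batches of 50, and only compare hashes where the title exists on both wikis. local_files is a hypothetical input, not a real API.)

```python
# Editor's sketch (not from the channel): find local files that also exist on
# Commons under the same name with the same content, querying 50 titles at a time.
import requests

COMMONS_API = "https://commons.wikimedia.org/w/api.php"

def commons_duplicates(local_files):
    """local_files: dict mapping a file name (no 'File:' prefix) to its SHA1."""
    names = list(local_files)
    for start in range(0, len(names), 50):           # API allows 50 titles per request
        batch = names[start:start + 50]
        params = {
            "action": "query",
            "titles": "|".join("File:" + name for name in batch),
            "prop": "imageinfo",
            "iiprop": "sha1",
            "format": "json",
        }
        data = requests.get(COMMONS_API, params=params).json()
        for page in data["query"]["pages"].values():
            if "missing" in page or "imageinfo" not in page:
                continue                              # no file of that name on Commons
            name = page["title"][len("File:"):].replace(" ", "_")
            if page["imageinfo"][0]["sha1"] == local_files.get(name):
                yield name                            # same name and same content

# Usage idea: collect the generator into a list and review it manually
# before deleting anything locally.
```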
[20:17:58] Does anyone use SAML for MediaWiki authentication?
[21:24:44] Does anyone know the page that is the default empty page?
[21:25:43] "the page does not exist yet" has some unwanted wording that I would like to remove
[21:56:43] Zolotkey: you can use ?uselang=qqx to figure out what message keys are being used
[21:56:53] I believe it's MediaWiki:noarticletext though
[21:57:04] ah I see
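(Editor's sketch, not from the channel: the allmessages API can confirm the message key suggested above and show its current text before you override it by creating MediaWiki:Noarticletext on your own wiki. The endpoint below is a placeholder.)

```python
# Editor's sketch (not from the channel): look up the current "noarticletext"
# message via the API before overriding it on the wiki.
import requests

API_URL = "http://www.example.org/w/api.php"  # hypothetical wiki endpoint

params = {
    "action": "query",
    "meta": "allmessages",
    "ammessages": "noarticletext",
    "format": "json",
}
data = requests.get(API_URL, params=params).json()
for msg in data["query"]["allmessages"]:
    print(msg["name"], "->", msg.get("*", "(not set)"))
```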