[01:17:02] how could I generate image thumbs with this script
[01:17:04] https://commons.wikimedia.org/wiki/User:The_Photographer/common.js
[01:17:57] This script works on the page https://commons.wikimedia.org/w/index.php?title=Special:Log
[01:21:12] hi
[01:24:36] hi lbertolotti
[01:24:46] hi ThePhotographer
[01:24:57] hi
[01:25:19] are you doing this at your own wiki?
[01:25:25] no
[01:25:30] at commons?
[01:25:33] yes
[01:25:53] Well, Commons is "my wiki"
[01:25:55] do you know how to install scripts?
[01:26:10] It is running
[01:26:22] ok
[01:26:27] does it work?
[01:26:32] however I need a function to get the URL to generate the thumb
[01:26:43] I have the filename but I don't have the thumb path
[01:27:17] I'm trying to convert the upload log list into a list of thumbs
[01:27:32] ok one sec
[01:28:00] Because this page has a filter by tag that the upload list doesn't have
[01:28:45] I'm getting an error trying to read a Wikipedia dump
[01:29:57] ThePhotographer, https://www.mediawiki.org/wiki/API:Imageinfo look for "URL to an image scaled to this width"
[01:30:25] something related to indexing
[01:30:52] lbertolotti, say which dump you are reading, specifically what the error is, and what you are reading it with.
[01:30:55] lbertolotti, i'm not familiar with dumps personally, but if you provide more details then someone else may be able to help you
[01:31:30] my god no, I don't want to kill a fly with a cannon
[01:31:34] lol
[01:32:28] well, I need to reproduce that error
[01:34:18] the dump is
[01:34:27] enwiki-20180401-pages-articles-multistream.xml.bz2
[01:35:26] Sveta: thanks, some day I will use the API again
[01:36:52] ThePhotographer, you could take a look at the other page (upload log) and right-click each thumb to see their URLs; i doubt they're human-friendly in the way that you can generate them yourself
[01:37:11] ThePhotographer, filing a bug against the upload log so that it has a filter by tag
[01:37:19] ThePhotographer, ...
is another attractive option
[01:37:55] lbertolotti, thanks for the dump name; knowing the exact full error message, and what you're reading it with, could be helpful
[01:39:23] Sveta: where could I see this code?
[01:39:35] Sveta: forget the filter by tag
[01:39:49] The simple thing that I want to do is generate the thumb
[01:40:45] ThePhotographer, right-click the image, click 'view image', then you can see its URL
[01:41:21] i know that
[01:41:36] I was thinking of finding another user script with similar code
[01:41:40] Hi.. I'd like to submit a feature to Wikipedia
[01:41:54] phabricator is in the way
[01:41:55] shred: use phabricator
[01:42:45] ThePhotographer, put this in a web search engine,
[01:42:47] wiki javascript "new mw.api" api imageinfo
[01:43:01] Sveta: I have the MediaWiki API
[01:43:09] to me it gives https://fr.wikipedia.org/wiki/MediaWiki:Gadget-CategoryDeluxe.js and https://en.wikipedia.org/wiki/MediaWiki:Gadget-formWizard-core.js
[01:43:10] I already used it
[01:44:52] ThePhotographer, i guess you could go through the log for the entire page, collect the image file names, pass the long list of their names to 'titles', and get a long list of thumbnail URLs back, in just one api query
[01:46:20] Thanks for your help, I prefer not to use the API because there would be 500 calls per page to it
[01:46:27] what was used to make this? https://en.wikipedia.org/wiki/Wikipedia:List_of_Wikipedians_by_number_of_edits/1%E2%80%931000 i would like to do something like this on my wiki
[01:46:33] is it pure templates or is there a plugin involved?
[01:48:33] sveta, BzReader is giving me "Unhandled exception has occurred in your application"
[01:48:58] ThePhotographer, you can make one call and get 500 URLs back
[01:49:02] the index does not exist for enwiki-20108401-page-articles-m.......
[01:50:07] Sveta: another way would be to convert wiki markup to HTML; is there some function to do that?
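The batched imageinfo query Sveta describes can be sketched as plain JavaScript that builds the api.php URL. This is a sketch, not code from the log: the file names are made-up examples, and `iiurlwidth` is the "URL to an image scaled to this width" parameter from API:Imageinfo mentioned earlier.

```javascript
// Sketch: build ONE imageinfo query that returns scaled-thumbnail URLs
// for a whole batch of files, instead of one request per image.
// The file names passed in below are hypothetical examples.
function buildThumbQuery(fileNames, width) {
  const params = new URLSearchParams({
    action: 'query',
    format: 'json',
    prop: 'imageinfo',
    iiprop: 'url',
    iiurlwidth: String(width),
    // Many titles joined with '|' => one HTTP call for the whole batch.
    titles: fileNames.map(function (n) { return 'File:' + n; }).join('|')
  });
  return 'https://commons.wikimedia.org/w/api.php?' + params.toString();
}

const url = buildThumbQuery(['Terrasse_Dufferin_2.jpg', 'Example.jpg'], 128);
```

The response then carries a `thumburl` per file; note that non-bot accounts are normally limited to 50 titles per query (500 with `apihighlimits`).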
[01:50:26] kevindank, greetings :-) it appears to be made by BernsteinBot, probably https://github.com/mzmcbride/database-reports/tree/master (linked from https://en.wikipedia.org/wiki/Wikipedia:Bots/Requests_for_approval/BernsteinBot)
[01:50:45] kevindank, (got the bot nickname from the history tab)
[01:50:49] WikiToHTML([[Image:Terrasse_Dufferin_2.jpg|thumb|right|128px|]])
[01:51:03] ThePhotographer, a 'parse' api query is another option
[01:51:24] ThePhotographer, but it'd return you 500 paragraphs, and you'd then need to parse the HTML. parsing JSON output is easier than parsing HTML
[01:51:39] lbertolotti, i see
[01:51:48] Sveta: some example?
[01:51:51] shred, what feature would you like to suggest?
[01:52:40] this example is very badly implemented; the result is difficult to use https://fr.wikipedia.org/wiki/MediaWiki:Gadget-CategoryDeluxe.js
[01:53:33] a back to top button for really long articles
[01:54:13] I was reading something the other day and they had 'go back' anchor tags every few headings
[01:54:59] shred, i think you may wish to try the javascript code from https://en.wikipedia.org/wiki/Wikipedia:Back_to_top in [[meta:Special:MyPage/global.js]] and see if it starts working. if not then i may be able to debug it and make it work for you.
[01:58:23] ThePhotographer, the part that you really care about is http://dpaste.com/1NVQ4KZ i think
[01:59:02] ThePhotographer, https://commons.wikimedia.org/wiki/MediaWiki:UploadHelper-fa.js getMediaInfo is a little better
[01:59:33] right, that looks like it adds a link back to the top to the headers
[01:59:41] ThePhotographer, https://commons.wikimedia.org/wiki/MediaWiki:Split!.js initUI and initUI.done earlier are also not terribly bad
[01:59:59] shred, let's see if these links work and behave the way you expect
[02:00:06] it's just ugly..
[02:00:49] shred, if you link me to the thing that you were reading the other day, i may be able to modify the appearance of these links so that they look better
[02:00:52] I made an opaque button that slides the page back up to the top
[02:01:05] but they will remain links..
[02:01:29] why not just add a small little floating button to the side that follows you down the page?
[02:01:39] like on most other sites..
[02:01:41] Sveta: would i need to create a bot in order to do this myself? is there another way to retrieve a list of top users based on number of contributions?
[02:02:32] shred, hold on. (what's your wiki user name in case i don't finish it within several minutes?)
[02:02:56] isoscelesq
[02:03:09] i just signed up to phabricator
[02:03:44] kevindank, do these users need to be active within the last 30 days?
[02:05:00] Sveta: https://commons.wikimedia.org/wiki/User:The_Photographer/common.js
[02:07:35] Sveta: does not matter
[02:10:28] I figured there'd be a github account for wikipedia and I'd make a pull request - but going through the mediawiki.org/wiki/Developer_hub it feels like I'm going in circles
[02:10:44] kevindank, ok
[02:10:51] shred, i'm working on it, give me a few minutes
[02:11:29] sure, but what are you working on.. I already have a file, I'll just give it to you
[02:13:14] Sveta: it works
[02:13:28] Sveta: a bit crazy but it works
[02:15:39] ThePhotographer, as long as it's one api query for 500 files it would be ok i think
[02:15:46] shred, https://en.wikipedia.org/w/index.php?title=User:Gryllida/common.js
[02:16:07] Sveta: it is only ONE call
[02:16:09] shred, i'm a little tired of css though, i am about to put an image in
[02:16:15] ThePhotographer, that sounds the best
[02:16:47] Sveta: You could try using my script
[02:18:07] Sveta: I will work on your upload log on commons
[02:18:26] ThePhotographer, i'll do that in a bit, thanks! i've got it by the url you sent a bit earlier
[02:18:41] that looks good I suppose.. how do I render it?
[02:18:43] Your username is...?
[02:19:04] http://wm-bot.wmflabs.org/dump/%23mediawiki.htm
[02:19:04] @info Sveta
[02:19:14] why'd you just write it - I wrote the css/html and js for it in a file for you guys
[02:19:21] Anyway, I will go to sleep, thanks again Sveta
[02:21:20] The only problem is that the wikitext can't be big, I can only show 50 images per page
[02:24:17] shred, it should be ready to use now, i've put a background image in and it is in the center
[02:24:32] ThePhotographer, my wiki username is Gryllida
[02:24:49] ThePhotographer, have a great night
[02:27:02] thanks Sveta, so if it's a small enough feature you guys just make it yourself - is it less trouble than accepting code submissions?
[02:27:09] shred, https://en.wikipedia.org/wiki/User:Gryllida/js/goToTop-0.1.js
[02:28:39] shred, i've just written it because i know that a user script can do it - it could be further added to mediawiki (as a part of the skin, needs a code submission) or a wiki (as a gadget, needs a wiki administrator to do it, often in the form of community consensus)
[02:29:43] that's what I meant.. as part of the skin (which I assume means it makes it to the live site for everyone to use)
[02:30:50] shred, you may wish to file a phabricator ticket for that and see what the skin developers want to do with it. as far as i know the decision would be made by the skin developers themselves
[02:31:55] kevindank, i'm seeing you're still there, i'm not sure what api query could be used for this. a database query like what the bot does sounds like the best option
[02:32:16] okay, thanks
[02:32:38] kevindank, you can get access to a copy of the database without the passwords via wikimedia tools labs / toolforge or something similar, perhaps take a look at wikimedia wikitech for details on how to get this kind of access
[02:32:44] where do I submit one of those
[02:32:56] kevindank, what is your wiki username in case i find something relevant several days or hours later?
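The goToTop user script itself is not quoted in the log; a minimal sketch of a floating back-to-top button of the kind discussed above could look like the following. The 400px threshold, the element id, and the styling are my own illustrative choices, not taken from User:Gryllida/js/goToTop-0.1.js.

```javascript
// Sketch of a floating "back to top" button, as discussed above.
// All names and values here are illustrative, not from the real script.

// Pure helper: show the button only after scrolling past `threshold`.
function shouldShowButton(scrollY, threshold) {
  return scrollY > threshold;
}

// DOM wiring, guarded so the helper stays testable outside a browser.
if (typeof document !== 'undefined') {
  const btn = document.createElement('button');
  btn.id = 'go-to-top';
  btn.textContent = '↑ Top';
  btn.style.cssText =
    'position:fixed;bottom:1em;right:1em;display:none;opacity:0.7;';
  btn.addEventListener('click', function () {
    window.scrollTo({ top: 0, behavior: 'smooth' });
  });
  window.addEventListener('scroll', function () {
    btn.style.display =
      shouldShowButton(window.scrollY, 400) ? 'block' : 'none';
  });
  document.body.appendChild(btn);
}
```

Pasted into [[meta:Special:MyPage/global.js]], something along these lines gives the "small floating button that follows you down the page" shred asked for.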
[02:33:12] shred, https://phabricator.wikimedia.org/maniphest/task/edit/form/1/
[02:36:10] thanks! Sveta
[02:36:55] no worries
[02:37:32] would this be a task in phabricator?
[02:40:52] Sveta, unfortunately no wiki page. this is for a mediawiki build
[02:43:00] shred, yes
[02:43:12] shred, mediawiki skin developers use phabricator to keep track of their issues
[02:43:51] it's time for me to head outdoors for several hours, but i'll hopefully be able to follow up in several hours
[02:46:04] cheers, thanks for all your help
[02:46:18] no worries, feel free to share the link to the phabricator ticket once you've created it
[06:22:46] hi
[06:22:52] whats up
[06:23:12] lets have a project and get rich
[06:23:16] who is in
[07:39:58] how can I filter by language in recent changes?
[07:50:26] on a different wiki this is done by &trailer=%2Fru
[09:02:46] uhh, a lot of bugs...
[09:11:54] cronolio, the trailer is a part of Extension:Translate
[13:02:01] Hello,
[13:02:01] One small doubt
[13:02:02] Why here https://www.mediawiki.org/wiki/HTMLForm#Example%20usage
[13:02:02] arrays are being used in the syntax $formDescriptor = [ ],
[13:02:02] but here https://www.mediawiki.org/wiki/Manual:$wgExtensionCredits#Usage
[13:02:03] arrays are being used in the syntax $wgExtensionCredits['specialpage'][] = array( );
[13:02:37] And?
[13:02:59] I mean the term "array" and its braces "( )"
[13:02:59] array() is old syntax
[13:03:06] [] is new syntax
[13:03:57] Oh, I saw array( ) in many places in the wiki's code. Can that be changed to the new syntax?
[13:04:31] I suspect most of them have been replaced
[13:06:24] Reedy: Ok thanks, I am not sure; I will check it out
[13:14:54] Reedy: Then can we edit the documentation here https://www.mediawiki.org/wiki/Manual:$wgExtensionCredits to use the new syntax?
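The two array styles Reedy contrasts are interchangeable in PHP 5.4+. A config fragment of the kind the manual page shows, written both ways (the field values below are illustrative, not from the actual page):

```php
// Old syntax, as Manual:$wgExtensionCredits showed at the time:
$wgExtensionCredits['specialpage'][] = array(
    'name'   => 'MyExtension',    // illustrative values
    'author' => 'Example Author',
);

// Equivalent short-array syntax (PHP 5.4+), as on the HTMLForm page:
$wgExtensionCredits['specialpage'][] = [
    'name'   => 'MyExtension',
    'author' => 'Example Author',
];
```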
[13:15:28] You can, yeah
[13:15:55] Reedy: Thanks
[13:23:31] Reedy: Now that I've modified it, can you please check it out https://www.mediawiki.org/wiki/Manual:$wgExtensionCredits
[14:35:05] I cannot, for the life of me, find any Python wrappers for the MediaWiki API that allow editing pages..
[14:35:06] Anyone?
[14:36:16] !apiclient
[14:36:16] https://www.mediawiki.org/wiki/API:Client_code
[14:36:24] https://www.mediawiki.org/wiki/API:Client_code#Python
[14:58:52] hi, I'm getting 'anoneditwarning' when entering edit mode on any page. does anyone know how I can start to figure out why? ty
[14:59:19] niso: ...because you are not logged in?
[15:00:14] lol, I'm logged in ^_^, it's just when pressing edit on any page it seems that I'm not (after continuing to browse the site I'm logged in again)
[15:05:27] Is it possible to remove textcat?
[15:05:43] From where?
[15:06:52] /vendor/wikipedia/textcat
[15:06:56] As far as I remember
[15:07:38] Why that specifically?
[15:07:50] The simple answer is just to not use the WMF vendor repo
[15:08:10] It's only required for CirrusSearch
[15:14:28] Because it uses up A LOT of space... And I take nightly backups of my MediaWiki site.. :/
[15:14:35] So I should probably make it more lightweight
[15:15:00] [16:07:50] The simple answer is just to not use the WMF vendor repo
[15:15:29] it's 38MB
[15:15:31] that's not a lot of space
[15:15:47] I'd imagine it'll compress easily
[15:16:17] Granted, it's nearly 50% of the entire vendor dir
[15:16:42] Are you using CirrusSearch?
[15:17:14] Reedy, not as far as I know
[15:17:53] I'm considering stripping as much as I can away.. Currently each backup takes up around 43 MB (not much), but it would be awesome to strip it further down
[15:23:26] may I have a little direction about this stack trace: https://ybin.me/p/6f993a9654f28829#DrZ69yZgeU5v58NwY/VmMg8q30HW4e+PQapiPseK5ME=
[15:32:02] the state in which I appear as logged out seems to happen on user pages as well, any idea how to debug this?
[15:43:56] [DBPerformance] Expectation (writes <= 0) by MediaWiki::main not met (actual: 1):
[15:43:56] query-m: REPLACE INTO `mw_objectcache` (keyname,value,exptime) VALUES ('X')
[15:44:00] Wonder why it expects no writes...
[15:51:45] GET request?
[15:52:00] hmm, I think there was a bug for this
[15:52:40] https://phabricator.wikimedia.org/T154424
[15:59:23] I'm working on 1.30; that looks like it was merged in branch 1.29
[16:01:20] the bug is still open, it hasn't been fixed yet
[16:01:40] niso: I would suggest installing the apcu PHP extension, and setting $wgMainCacheType = CACHE_ACCEL
[18:23:10] hi
[18:23:21] i have a question
[18:23:21] Hi paula, just ask! There is no need to ask if you can ask; if you already asked the question please wait for someone to respond
[18:25:29] When one publishes a URL in Wikidata, does the copyright become public, so anyone can use the material that is included at the URL? That is, are the moral and patrimonial rights of the author made public?
[18:27:12] look at the bottom of the page for the link to the license. That explains in detail what you are and are not allowed to do with the content
[18:27:36] if it's in the main or property namespace, it's CC0
[18:27:47] otherwise it's CC-BY-SA
[18:28:22] also, only the material *published on wikidata* is licensed that way
[18:29:07] if you publish a link, you are saying that you are releasing copyrights to that link itself (although I'm pretty sure that a URL cannot be copyrighted to begin with, so likely a non-issue).
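The caching suggestion to niso is a one-line LocalSettings.php change. A sketch, assuming the apcu PHP extension is installed and enabled:

```php
// LocalSettings.php — use the APCu in-process object cache,
// as suggested above for the spurious 'anoneditwarning' issue.
$wgMainCacheType = CACHE_ACCEL;
```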
Any content on the linked page has its own copyrights
[18:29:55] that means that publishing the URL in Wikidata does not make the content licensed in that way
[18:30:07] no
[18:30:38] ok, I understand!
[18:30:45] thanks for your help!
[21:55:42] Querying the MediaWiki API, is there some parameter to exclude subpages? Like I want "Extension:Foo" in the results but not "Extension:Foo/Bar"?
[21:56:09] Which query?
[21:58:13] Reedy: https://www.mediawiki.org/w/api.php?action=query&list=backlinks&bltitle=Template:Archived_extension&bllimit=5000&blnamespace=102
[21:58:34] too many translation results in there
[21:59:43] I'm guessing not... It'd be a post-processing thing and pagination
[21:59:46] btw, use bllimit=max
[21:59:55] So you don't have to hard-code the numbers
[22:00:14] quarry?
[22:01:34] Meh. Alright, thanks...
[22:06:18] andre__: unless
[22:06:49] You can use it as a generator for another module...
[23:27:11] Hi
[23:27:21] Is MediaWiki really the software used by Wikidata?
[23:27:32] Because its editing interface seems to be less liberal, more specific
[23:27:55] unlike, for instance, Wiktionary, which has an interface as liberal as Wikipedia, yet for a much more specific standard format of pages
[23:28:07] Btw, why is Wikidata untyped?
[23:28:08] galex-713: Wikidata uses an extension called Wikibase: https://www.mediawiki.org/wiki/Wikibase
[23:28:26] For instance, in the "languages" section, you can put something other than a language
[23:28:29] that is error-prone
[23:28:45] SMalyshev: okay… is this the right place to ask about this extension?
[23:29:00] galex-713: the #wikidata channel may be a better place
[23:29:37] okay, thank you
[23:29:45] SMalyshev: even if it's about a software detail?
[23:29:57] like the possibility of this or that, enabled by Wikibase?
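Reedy's answer above — no API parameter for this, so filter afterwards — can be sketched as a small post-processing step over the backlinks result. The sample titles are made up for illustration:

```javascript
// Sketch: drop subpages ("Extension:Foo/Bar") from a backlinks result,
// keeping only top-level pages ("Extension:Foo"), as suggested above.
function withoutSubpages(backlinks) {
  return backlinks.filter(function (page) {
    // A '/' after the namespace prefix marks a subpage.
    const colon = page.title.indexOf(':');
    return page.title.indexOf('/', colon + 1) === -1;
  });
}

// Made-up sample shaped like query.backlinks entries:
const sample = [
  { title: 'Extension:Foo' },
  { title: 'Extension:Foo/Bar' },
  { title: 'Extension:Baz/de' }
];
const topLevel = withoutSubpages(sample);
```

This also filters out the translation subpages ("/de", "/ru", ...) that were cluttering the results.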
[23:31:37] all of that validation etc. is done on top of MediaWiki
[23:31:40] still better on #wikidata, but you can ask here if you want :)
[23:42:14] !apiclient
[23:42:14] https://www.mediawiki.org/wiki/API:Client_code