[00:00:12] um
[00:00:32] Krenair, yes: http://wiki.novawebdev.webfactional.com/index.php
[00:00:40] you should get something like this: https://aa.wikipedia.org/wiki/Special:Version
[00:01:02] that looks pretty broken..
[00:01:19] very broken
[00:01:34] i had that magicword problem before, but I found a patch that fixed it for me
[00:02:15] you can get the version from javascript though
[00:02:18] wgVersion
[00:02:18] "1.20.2"
[00:02:48] nicky22, I saw something about that, but it said that version 1.19 made the patch unnecessary. Since I'm at 1.20.2 I didn't want to go there without checking.
[00:03:21] you realise that 1.20 was EOL before the end of 2013 right?
[00:03:32] Krenair, Yeah. I already mentioned that in the beginning of my question. And PHP is at 5.2.17.
[00:03:47] Hmm.. I had this on 1.21.3 and the patch fixed it for me
[00:03:48] oh
[00:03:51] right, yeah
[00:03:53] that won't work
[00:03:57] Krenair, I'm not the guy who set it up, nor the regular maintainer.
[00:04:11] minimum PHP version was 5.3.2
[00:04:20] https://gerrit.wikimedia.org/r/#/c/122450/1/includes/MagicWord.php this is what i used
[00:05:29] nicky22, Methinks I'm going to try that. And, Krenair, though I'm not sure WebFaction will respond, I'll try lobbying for a newer PHP so that I can upgrade.
[00:05:59] yeah, that patch would've been released for 1.19 and 1.21, not 1.20. You can probably manually apply it to 1.20 but it won't be supported
[00:06:01] you might as well try, you can always revert if that doesn't work for you.
[00:19:12] Krenair, nicky22, That was sufficient to fix it... for now.
[00:19:21] Krenair, nicky22, thanks.
[00:19:27] Did that resolve your login issue?
[00:20:37] Krenair, the login issue was completely separate: I was attempting to register with www.mediawiki.org. It's a question for mediawiki's admins.
[00:20:46] Oh, I see.
[00:20:53] Yes, mediawiki.org is hosted by Wikimedia
[00:21:00] And so has integrated login with the rest of Wikimedia's wikis
[00:21:44] e.g. https://www.mediawiki.org/wiki/Special:CentralAuth/Krenair
[00:21:58] you'll find mediawiki.org listed, along with several hundred other sites
[00:22:40] if I change my email, password, etc. on any one of those, they're all changed
[00:22:45] Krenair, yeah. I'm just wondering where else I've registered (or who else has the same name). Will the URL you gave work for me to find my doppelganger?
[00:22:49] my global rights apply on all of those
[00:22:58] Yep
[00:23:14] If you put the username into the form at the top and submit, it will show you where the username is registered
[00:24:14] And yes. I see it's me. Now to see if I have the password squirreled away. (Almost assuredly yes.)
[00:24:48] if not you can try resetting the password
[00:24:54] assuming you set an email address
[02:18:31] Is the REST API publicly accessible?
[02:19:10] Yes.
[02:19:15] the api should work unless the administrator turned it off
[02:19:39] https://rest.wikimedia.org/en.wikipedia.org/v1/?doc
[02:19:45] I'm looking at the doc and getting confused about how I actually send a query.
[02:19:49] via $wgEnableAPI = false
[02:19:59] Krenair: REST probably means RESTBase?
[02:20:07] Oh, maybe not.
[02:20:14] Katie, I don't think so, most wikis won't have that
[02:20:17] ResMar: Do you mean api.php?
[02:20:22] https://rest.wikimedia.org/en.wikipedia.org/v1/?doc#!/Transforms/transform_html_to_wikitext__title___revision__post
[02:20:33] What are you trying to do?
[02:20:44] You probably don't need rest.wikimedia.org.
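The wgVersion trick above reads the version out of the browser's JavaScript console. For a script, the same information is exposed by the standard api.php endpoint via meta=siteinfo. A minimal Python sketch, assuming the third-party requests library and that the target wiki has its API enabled:

```python
# Read a wiki's MediaWiki version over HTTP instead of from the JS console.
import requests

def mediawiki_version(api_url):
    """Return the generator string, e.g. 'MediaWiki 1.20.2'."""
    params = {
        "action": "query",
        "meta": "siteinfo",
        "siprop": "general",
        "format": "json",
    }
    r = requests.get(api_url, params=params, timeout=10)
    r.raise_for_status()
    return r.json()["query"]["general"]["generator"]

print(mediawiki_version("https://aa.wikipedia.org/w/api.php"))
```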
[02:20:56] But both api.php and rest.wikimedia.org are public, as far as I know.
[02:21:02] It provides capabilities that the api.php lacks.
[02:21:09] In particular I'm looking at HTML-to-wikicode conversion.
[02:21:27] How do I work that into a script? I don't see any obvious instructions on where to send queries.
[02:21:57] Like, give me an example of a working URL request.
[02:22:04] You realise that this is #mediawiki, right?
[02:22:29] I could ask this question on any of the tech channels and someone will eventually answer it.
[02:22:34] This channel is fine.
[02:22:37] RESTBase is an external, entirely optional service that isn't bundled with MediaWiki at all
[02:22:39] You loiter on all of them anyway.
[02:22:41] Wikipedia runs MediaWiki. :-)
[02:22:55] Doesn't matter which pie I put my finger in.
[02:22:58] I wouldn't call it too external.
[02:23:11] I would call it very external.
[02:23:19] How?
[02:23:24] Okay, but the people working on it are Wikimedia Foundation employees.
[02:23:34] It's explicitly developed for VisualEditor and Parsoid.
[02:23:38] And that.
[02:23:41] Anyway, this doesn't matter.
[02:23:43] What are you trying to do?
[02:23:47] It's not even a MediaWiki extension
[02:24:02] ResMar: Why are you converting HTML to wikitext?
[02:24:02] Oh boo hoo hoo.
[02:24:20] I do the opposite in a few scripts.
[02:24:31] Katie: I am interested in exploring the capabilities of the API, though I have a few concrete use case examples.
[02:24:31] Because HTML is significantly saner to work with than wikitext.
[02:24:48] Disagree.
[02:24:48] Well, there's Special:ApiSandbox to explore the capabilities.
[02:24:50] !apisandbox
[02:24:55] Parsing links is much, much easier in wikicode.
[02:25:07] Useful bot we have.
[02:25:07] No it isn't.
[02:25:10] wat
[02:25:24] Wikitext links can contain all kinds of craziness.
[02:25:33] <a href=...> is vastly simpler.
[02:25:36] Hmm.
[02:25:47] Ok I see your point.
[02:25:47] Anyway, if you talk about your actual use cases, someone may be able to help.
[02:25:53] m: vs. :m: vs. meta
[02:26:06] But I'd like to ask a simpler question first.
[02:26:09] Which is how do I access this API?
[02:26:22] Or [[{{template that includes "m:"}}page]].
[02:26:29] Via a data request, not a sandbox.
[02:26:36] Or [[{{#if:m}}]]
[02:26:44] Edge cases
[02:26:55] You make an HTTP POST request to the API.
[02:27:04] It lives at https://en.wikipedia.org/w/api.php
[02:27:11] You send an HTTP POST request there and you'll get a response.
[02:27:25] It's the response body content you care about.
[02:27:40] But you generally don't need to go that low level. There are libraries and frameworks that have functions that simplify this.
[02:27:53] https://en.wikipedia.org/wiki/Special:ApiSandbox
[02:27:54] So that instead of sending raw HTTP requests, you send like "getWikiText(page)" or whatever.
[02:28:09] But if you want to send raw requests, that's also fine.
[02:28:14] Yeah well... to be honest I find those frameworks too involved to delve into. Besides, it's fun to work out the system yourself.
[02:28:16] You can use curl.
[02:29:03] So are you saying that this is more complicated than going to a URL?
[02:29:11] You can go to a URL.
[02:29:21] The API also accepts GET requests.
[02:29:22] Again... how.
[02:29:26] Parts of it, anyway.
[02:29:29] That's the MediaWiki API.
[02:30:04] Sure.
[02:30:29] There's no dedicated system for RESTBase access?
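Before getting to RESTBase, here is what the plain "go to a URL" style of api.php access looks like as a script. A sketch assuming the requests library; the default JSON format is used deliberately, to show the pageid dictionary layer that comes up again later in the log:

```python
# Fetch the current wikitext of a page from api.php with a plain GET.
import requests

API = "https://en.wikipedia.org/w/api.php"

params = {
    "action": "query",
    "prop": "revisions",
    "rvprop": "content",
    "titles": "Main Page",
    "format": "json",
}
resp = requests.get(API, params=params, timeout=10)
resp.raise_for_status()

pages = resp.json()["query"]["pages"]   # keyed by pageid in the default format
page = next(iter(pages.values()))       # grab the one page without knowing its id
print(page["revisions"][0]["*"][:200])  # wikitext lives under the "*" key
```

Pasting the same parameters onto api.php in a browser's address bar returns the same JSON, which is all "going to a URL" means here.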
[02:30:40] So for instance HTML conversion is only available via: https://en.wikipedia.org/w/api.php?action=help&modules=flow-parsoid-utils
[02:30:43] ?
[02:31:18] If so I don't understand why there's an entire separate sandbox and documentation page that's been set up
[02:31:19] I imagine you'd bypass api.php in this case.
[02:31:31] If you want to interact with RESTBase.
[02:31:35] You'd just interact with it directly for now.
[02:32:12] O...k...uh...
[02:33:18] So to do API queries, or at least all of the ones that I have done so far, I have constructed strings and then sent them against `api.php?` using the Python requests library.
[02:33:24] And then worked with the output.
[02:33:35] Right.
[02:33:42] So instead of using api.php as your base, you'll use...
[02:33:47] https://rest.wikimedia.org/en.wikipedia.org/v1/
[02:33:55] Example: https://rest.wikimedia.org/en.wikipedia.org/v1/page/title/Barack%20Obama/
[02:34:19] Is it impossible for them to have said this somewhere in their documentation?
[02:34:36] It's documented here, kind of: https://rest.wikimedia.org/en.wikipedia.org/v1/?doc
[02:34:41] There are GET sections.
[02:34:44] With some "Try it out!" buttons.
[02:34:49] And then there are POST sections.
[02:34:58] Which also have "Try it out!" buttons.
[02:35:33] This for instance fails: https://rest.wikimedia.org:443/en.wikipedia.org/v1/transform/html/to/wikitext
[02:36:06] Fails how?
[02:36:17] File not found
[02:36:19] what are you posting to it, and what does it return in response?
[02:36:25] You have to send a POST request.
[02:36:30] With the HTML you want to turn into wikitext.
[02:36:32] Did you do that?
[02:38:13] POST methods only huh. So I assume the browser doesn't do it.
[02:38:30] But then I don't understand what the GET output for your second example request is either.
[02:38:43] I see items and then a lot of numbers.
[02:39:15] "Lists revisions"
[02:39:19] Revision IDs then?
[02:39:51] But only ones stored in RESTBase... whatever that means.
[02:39:56] is there php code that can do a ctrl-F5?
[02:40:07] because my counter is only refreshing when i press ctrl-F5 on chrome, it works fine on IE
[02:40:19] ctrl-F5 = refresh without cache?
[02:40:28] i guess so
[02:40:31] ResMar: which functionality or content are you looking for?
[02:40:40] since it only updates my counter when i press ctrl-F5
[02:40:46] when i press F5 it stays stuck at 0
[02:40:51] on chrome and ff
[02:41:07] There is probably a way to force your php not to use cache, rather than trying to do a refresh without cache
[02:41:26] gwicke: Fine, is there a LIGHTWEIGHT library available for Python for handling RESTBase API queries?
[02:41:31] i.e. not named pywikibot
[02:41:42] ResMar: any HTTP client will do
[02:41:56] requests?
[02:42:03] Because I'll be honest, it was easier for me to rewrite my own API request methods than it was to figure out what the heck is going on within pywikibot.
[02:42:15] http://docs.python-requests.org/en/latest/
[02:42:31] do u know any?
[02:42:35] because i already have this line
[02:42:44] $parser->disableCache();
[02:42:51] Take a look at https://phabricator.wikimedia.org/P850 ResMar
[02:43:24] particularly lines 84-89 and 113
[02:43:24] *whistles*
[02:43:52] grr_: honestly, I know very little php
[02:44:08] is ok. thanks though =)
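The "File not found" failure above is what the transform endpoint returns when hit with GET; it only answers POST, with the HTML in the request body. A hedged sketch of the POST, again with requests; the `html` field name follows the interactive ?doc page, so treat it as an assumption if your RESTBase version differs:

```python
# POST a fragment of (Parsoid-style) HTML to RESTBase and get wikitext back.
import requests

URL = "https://rest.wikimedia.org/en.wikipedia.org/v1/transform/html/to/wikitext"

html = '<p>Hello, <a rel="mw:WikiLink" href="./Earth">Earth</a>!</p>'
resp = requests.post(URL, data={"html": html}, timeout=30)
resp.raise_for_status()
print(resp.text)  # the wikitext rendering of the posted HTML
```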
[02:44:10] but some people online say you can set some kind of header
[02:44:14] i did
[02:44:19] damn
[02:44:19] apparently not working
[02:44:32] If all requests are that complicated, between a lack of understanding of what's going on (sorry, not a CS major) and the complexity involved and the (typical) lack of a good high-level walkthrough, I doubt it'll be worth the effort for me.
[02:45:15] Now, I've not done POST requests before, but I can't imagine they're much worse than GET, and I hope that Requests handles them.
[02:45:25] I don't have a CS degree either
[02:45:52] most of that paste is the ssl cli client being quite loud
[02:46:20] > requests.post(url, data=None, **kwargs)
[02:46:37] > Sends a POST request. Returns Response object.
[02:46:40] Sounds like business.
[02:46:41] ResMar: Do you understand the difference between HTTP GET and HTTP POST requests?
[02:46:56] Not at all, I'm sure I could read up about it on... Wikipedia...
[02:47:04] Yes.
[02:47:08] Beyond that one takes data and the other posts it.
[02:47:16] a post request sends data with the url, no?
[02:47:22] GET requests are the equivalent of hitting enter and visiting a URL in your browser.
[02:47:23] And that there's a third request method of some sort.
[02:47:29] A POST request is like a form submission.
[02:47:37] So your browser does POSTs all the time.
[02:47:40] Every time you make an edit.
[02:47:49] Or change your user preferences. Or search for something.
[02:47:58] And you make a lot of GETs by browsing around.
[02:48:41] Well sure I know that much
[02:48:42] GET requests are supposed to be idempotent and not change state on the server.
[02:48:47] But that's a high level description.
[02:49:00] POST requests generally do change state on the server.
[02:49:01] For me it's more a question of "How much worse is it going to be to make a POST request, than it is to make a GET one?"
[02:49:15] They're both simple using the requests library.
[02:49:23] Well there you go
[02:49:49] You should try to be a bit less agitated and whiny.
[02:49:59] It'll make people want to help you more.
[02:50:05] ez. i've never used the requests library before and i've figured it out just by looking at the docs.
[02:50:15] If you don't like the current documentation, write better docs. :-)
[02:50:16] They have good docs
[02:50:22] Unlike say... pywikibot...
[02:50:28] If you don't like the current documentation, write better docs. :-)
[02:50:38] Click edit. It's a wiki world.
[02:50:42] See
[02:50:56] But that assumes that I understand the tech at a masterfully low level.
[02:51:12] A guy falls into a hole, see.
[02:51:28] Which is a different thing from arriving there looking for a way to get something done, and being frustrated by a document written for people at the same level as the writer.
[02:51:29] any idea how to get the author of the created page/article in mediawiki ??
[02:51:35] Or documentation that's just missing in places.
[02:51:46] four: ummm
[02:51:46] four: You can query the first revision of the page.
[02:51:52] Using api.php.
[02:51:54] ^
[02:52:11] ResMar: Patches welcome.
[02:52:21] Patches?
[02:52:38] Software patch. It's a thing.
[02:52:45] I can barely use the software, how could I patch it?
[02:52:50] ResMar: As they say on "House of Cards," you're entitled to nothing. :-)
[02:53:04] You seem to have some grand sense of entitlement and expectation about how things should be.
[02:53:13] Which is fine, I guess, but you'll then have to help make it that way.
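A tiny illustration of the GET/POST distinction explained above, assuming the requests library: the two verbs differ in where the data travels and whether the server's state is meant to change, not in how hard they are to send:

```python
import requests

API = "https://en.wikipedia.org/w/api.php"
payload = {"action": "query", "meta": "siteinfo", "format": "json"}

# GET: parameters ride in the URL, like hitting enter in the address bar.
r1 = requests.get(API, params=payload, timeout=10)

# POST: the same parameters travel in the request body, like a form submission.
# (The MediaWiki API accepts read queries over POST as well.)
r2 = requests.post(API, data=payload, timeout=10)

print(r1.status_code, r2.status_code)  # both 200; the response bodies match
```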
[02:53:26] Wow am I getting into an argument on the Internet?
[02:53:36] hmm ok ..
[02:53:45] four: https://en.wikipedia.org/wiki/Special:ApiSandbox#action=query&prop=revisions&format=json&rvprop=user&titles=User%3ASn1per
[02:53:50] you can probably fiddle with the date parameters
[02:54:05] You can change the sort order.
[02:54:13] To sort by oldest revision timestamp.
[02:54:20] And then limit to 1 entry.
[02:54:54] Oh and use formatversion=2
[02:55:04] so i have to use sql ?
[02:55:10] No.
[02:55:13] You can use api.php.
[02:55:19] four: Did you click GEOFBOT's link?
[02:55:26] yup i did
[02:55:30] looking @ it now
[02:55:34] Okay.
[02:56:45] four: using Katie's suggestion gets the first revision of my user page: https://en.wikipedia.org/wiki/Special:ApiSandbox#action=query&prop=revisions&format=json&rvprop=user%7Ccontent&rvlimit=1&rvdir=newer&titles=User%3ASn1per
[02:56:55] and you can look at who made the revision and you get the creator
[02:56:58] four: Just a small note, when you actually build your request I recco you include a `&formatversion=2` parameter, because otherwise you're going to have to go through a dictionary layer with an effectively random key, the pageid, to get to the data of the request.
[02:57:27] wow looks complicated
[02:57:36] i didn't know i have to fiddle with the url, i hate this kind of stuff
[02:57:51] actually what i want to do is, since i already retrieve the page title out
[02:58:00] beside it, i would like to display (Created by XYZ)
[02:58:06] still have to do the above u all mentioned? :)
[02:58:32] you don't have to fiddle with the URL
[02:58:50] APIsandbox gives you a nice interface to play with the parameters, then you get a finished URL
[02:59:00] or you can use the parameter values to build a request
[02:59:24] Here's a complete example: https://en.wikipedia.org/w/api.php?action=query&prop=revisions&format=json&continue=&formatversion=2&rvprop=user|content&rvlimit=1&rvdir=newer&titles=User%3ASn1per
[03:00:24] alright, will look at it . thanks for the help guys
[03:02:43] See, I feel like you can't get that easily from the documentation.
[03:03:00] I didn't know about formatversion=2 until I was 500 lines into the code I was writing.
[03:03:17] Why does it matter?
[03:03:56] Without it there's an additional key in the dict, the pageid, which is essentially a random number.
[03:04:36] can't you just grab the first item in a dictionary? or am i an idiot
[03:04:50] Well I don't know about other languages
[03:04:57] But in Python the dict is randomly sorted
[03:05:07] And json.loads() arbitrarily resorts it AFAIK
[03:05:24] It's not randomly sorted, it's unsorted.
[03:05:27] But yeah.
[03:06:06] I got past that key by listing the keys with .keys() and then picking it out of the lineup.
[03:06:42] Sure.
[03:06:42] It was poor work done all the way back at the beginning of the API's history that's loitered around since then for compatibility.
[03:07:19] You can thank Anomie for fixing that. :-)
[03:07:38] Now that graphs are enabled
[03:08:03] I am looking forward to every wiki writing its own local module encapsulations.
[03:08:18] The de.wp Module:Bar chart! The en.wp Module:Bar chart! Oy...
[03:08:38] Someone should write a bot to deal with it until interwiki transclusion finally shows up.
[03:10:00] Sometimes I think about all the work that went into Template:Convert that was obliterated the moment Lua was enabled, and shudder.
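The complete example URL above, rebuilt as a script. A sketch assuming the requests library; formatversion=2 turns the pages structure into a plain list, which avoids exactly the pageid dictionary layer and .keys() lineup-picking complained about in this exchange:

```python
# Who created a page: ask for the oldest revision only, and read its user.
import requests

API = "https://en.wikipedia.org/w/api.php"

def page_creator(title):
    params = {
        "action": "query",
        "prop": "revisions",
        "rvprop": "user",
        "rvlimit": "1",
        "rvdir": "newer",      # oldest revision first
        "titles": title,
        "format": "json",
        "formatversion": "2",  # pages comes back as a list, no pageid keys
        "continue": "",
    }
    data = requests.get(API, params=params, timeout=10).json()
    return data["query"]["pages"][0]["revisions"][0]["user"]

print(page_creator("User:Sn1per"))  # prints the account that created the page
```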
[03:11:36] Sure, but we're still much better off now with Lua.
[03:12:11] hello
[03:12:28] hi
[03:12:46] Yeah I know
[03:12:47] I mean
[03:12:57] It's a no brainer; why did it take so long to get Lua installed?
[03:13:13] Things are easier to see in hindsight
[03:13:30] I'm suspicious of it.
[03:13:38] It was a known problem the moment parsers showed up.
[03:14:05] Well, maybe not the *moment*, but very quickly...
[03:15:48] Same issue with SUL
[03:19:25] Wikimedia has become... a large corporation.
[03:20:49] Interestingly, when one googles "Wikimedia," an unsuccessful logo entry in the Wikimedia Commons logo competition is displayed
[03:21:07] I tend to refer to it as the Wikimedia movement
[03:21:19] Distinguished from the WMF and from Wikipedia itself.
[03:23:38] ResMar: a large.. corporation?
[03:24:08] doesn't feel like a corporation to me. i submit a patch, people tell me to fix it or get rid of it, I do what they say.
[03:25:52] Internal politics are consistently the least read part of the Signpost, but the one I find most interesting in writing my bit.
[03:26:07] i love reading about internal politics
[03:26:19] hi unicodesnowman, haven't seen you in a while
[03:26:29] unicodesnowman: I still love your nickname.
[03:26:40] hi GEOFBOT and ori :) ty
[03:26:50] I know more about Erik Möller leaving than I could possibly let on in a neutral venue, and there is a lot more that I don't know, but can guess at.
[03:28:02] unicodesnowman: did you acquire your nick before or after VE used those snowmen?
[03:28:17] visualeditor uses snowmen?
[03:28:23] I thought it used pawns?
[03:28:31] it started with snowmen
[03:28:42] these days it's an entire zoo afaik
[03:28:44] slippery slope!
[03:28:45] gwicke, way before!
[03:28:52] lol
[03:29:15] unicodesnowman: ah, cool ;)
[03:33:14] is it just me, or does The Register feel like a tech tabloid?
[03:33:25] it's not just you
[03:34:35] I read Ars Technica personally.
[03:34:45] "wealthy administrators at the Foundation devise schemes to spend the cash"
[03:34:55] But even that I find gets on my nerves sometimes.
[03:35:14] How many times can you reiterate how much you hate Comcast/Verizon/etc. etc. etc.
[03:35:32] I get it, telecoms are E-E-E-E-E-V-I-L. Some real news, please...
[03:37:28] Is slashdot still any good? Thought I might try it at some point.
[03:37:48] I find it difficult to navigate, though.
[03:38:27] reddit and hacker news have taken over some of that audience
[03:38:30] news.ycombinator.com and reddit.com/r/programming, highscalability.com, are decent
[03:38:35] :)
[03:39:27] Would a melting radioactive core defy gravity and go straight through the Earth?
[03:40:52] if you are looking for light entertainment, http://www.theonion.com/ can occasionally be fun
[03:41:06] because The Register apparently thinks fukushima is going to destroy the falklands
[03:41:15] The Onion is great.
[03:41:23] and the brits have a "secret evac cover-up"
[03:41:28] not always equally, but often.
[06:21:42] can anyone help me spot the error ? i am trying to do a query on the database
[06:21:51] i tried this (https://dpaste.de/RLXS) on phpmyadmin and it gets the result i want
[06:22:08] then i tried to implement this (https://dpaste.de/Xo3x) in my php file, but it is giving me a blank page
[06:22:15] anyone know where it went wrong?
[11:08:52] hey hey hey
[11:09:46] is there anyone who could suggest a good theme for uh.. "static webpages" with mw?
[11:10:12] we used to use GuMaxDD but I recently upgraded from 1.15 to latest, and it won't activate..
[11:13:49] mjau^, https://www.mediawiki.org/wiki/Category:All_skins
[11:14:49] andre__: thanks mate, I've looked at some of those but.. I dunno
[11:16:00] trouble finding something I think will work
[12:34:24] i am trying to implement single sign-on for my mediawiki and have made use of the remote auth user extension
[12:34:46] i am using apache (fastcgi/cgi) and the module (mod_sspi) for the single sign-on
[12:34:58] is mod_sspi compatible with cgi/fastcgi ??
[12:35:12] because it is working on my apache2handler but not working on cgi/fastcgi
[12:35:16] am i missing out anything
[12:38:05] is this the right place to ask? :)
[12:38:13] urms.......
[12:38:26] since it's related to mediawiki? :p i hope someone knows though :/
[12:40:56] you have posted to serverfault, right?
[12:41:09] just did but dun think anyone will reply though :/
[12:41:22] good question, it is not certain it works
[12:41:36] oh
[12:41:38] meaning ?
[12:41:38] as far as I remember SSPI auth requires some additional request turnaround, correct?
[12:41:51] i guess so
[12:42:11] are there any other methods to configure sso for my mediawiki using windows credentials ?
[12:42:12] :/
[12:42:15] (you can fix the formatting on your post - the config directives are glued)
[12:42:36] yes, I think one of the ldap/ad extensions
[12:42:45] will do, fixed, gimmi a min
[12:43:14] just fixed
[12:43:18] for the ldap/ad extension
[12:43:22] i did configure it
[12:43:28] i am able to login with the windows credentials
[12:43:33] but it is not auto-logging me in
[12:43:47] so i guess it is not working the way i want ? :/
[12:45:25] I don't know, I am not an expert on NTLM
[12:46:17] ahhhhhhhhhhhhhhhh
[12:46:23] been stuck for days, tired of it
[12:46:27] thanks though :)
[12:48:14] by the way
[12:48:17] is there any way
[12:48:29] to notify the admin that an article has been created
[12:48:36] based on namespace
[12:48:52] for example, Welcome: namespace?
[12:50:12] !extension:email_notification
[12:51:29] u mean this ?
[12:51:30] https://www.mediawiki.org/wiki/Extension:Email_notification
[12:51:57] should do the job
[12:52:58] hmm
[12:53:07] thanks saper :)
[12:53:10] i hope it does..
[12:53:29] you can also use the RSS feed https://pl.wikipedia.org/w/index.php?title=Specjalna:Nowe_strony&feed=atom&hideredirs=1&limit=50&offset=&namespace=0&username=&tagfilter= and an external database script http://cybergav.in/2012/03/31/new-page-notification-in-mediawiki/
[12:54:02] rss feeds?!?
[12:54:07] how do i achieve that using rss feeds?!?!
[12:58:54] just use an RSS reader to get notified
[12:59:14] and there are ready tools to convert RSS to email, like feedburner or something
[12:59:39] will look into it..
[12:59:42] thanks
[13:49:52] every time I upgrade, JS breaks for a few hours. Any ideas what would cause this?
[14:21:42] Is there a magic word that corresponds to the name of the current user viewing the page?
[14:22:53] Not by default. It'd be cache-busting :)
[14:22:57] Might be an extension
[14:23:10] Fair enough. Thanks!
[14:32:20] Nemo_bis: Why isn't there a way to disable the help links without using CSS? :P
[14:38:42] hey guys, currently on my main page, i am displaying the recent articles (1. title1, 2. title2, 3. title3)
[14:39:05] is it possible to remove (for example 1. title1) if it has been updated?
[14:39:18] i am using the newestpage extension to display the recent articles
[14:39:29] or is there any other better methods?
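A rough sketch of the RSS route suggested above for new-page notification, assuming the third-party feedparser library; the URL parameters mirror the Special:NewPages Atom link pasted in the channel (namespace selects which namespace to watch):

```python
# Poll the new-pages Atom feed and report each newly created page.
import feedparser

FEED = ("https://pl.wikipedia.org/w/index.php"
        "?title=Specjalna:Nowe_strony&feed=atom&hideredirs=1"
        "&limit=50&namespace=0")

for entry in feedparser.parse(FEED).entries:
    # Each entry is one new page; hand this off to email, IRC, whatever.
    print(entry.title, "created by", entry.get("author", "unknown"))
```

Run from cron with a record of already-seen entries, this is roughly what the RSS-to-email converters mentioned above automate.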
[14:54:36] Reedy: because CSS is superior to the overcrowded PHP/DB configuration :)
[14:54:52] Why?
[14:55:02] So we can send images and css to the user they don't need?
[14:55:05] On every request
[14:55:34] How much, 1 KB?
[14:56:05] Needing to disable the icons is a corner case, why worry about performance that much? I'd rather worry about technical debt.
[14:56:38] Amuses me the fact that dewiki disabled it
[14:58:21] Reedy: they had their own local implementation of the same feature, whatever. Just one wiki anyway.
[14:59:22] That speaks for MediaWiki core development getting faster to avoid local hacks. :) It took over 3 years for core to follow Translate's lead.
[15:00:15] Maybe we could modify SpecialPage::addHelpLink() so that when the message contains a hyphen (or something) the link is not displayed at all. But the message disabling logic is a mess and changes constantly, plus there isn't any demand for the feature currently.
[15:00:41] don't we have a ->isDisabled() or similar?
[15:04:17] Sure, that's what we currently use
[16:04:34] Hey. I know it's not the greatest idea to do this with mediawiki, but my boss wants it and there isn't much I can do against it.
[16:06:07] i need to make it possible for students who get a shared account with a password they know from the lecture to log into mediawiki and be able to read everything, but only be able to edit one article where they can ask questions. what would be the best way to do this? create a namespace for that one article?
[16:06:47] (there are 8 normal accounts for editors and that one "student" account)
[16:07:55] Create a Question namespace and give student accounts permissions for that namespace
[16:08:21] you could even have the students create new pages in the namespace for each question and link to those pages in a main page or something
[16:09:37] mh... that sounds good. so, create a namespace, and a usergroup, give them that permission, and somehow revoke editing permission for the main namespace from them?
[16:10:34] qgil_: Hello :) I'm wondering, has there been an assessment of the "buddy system" that was set up for the Lyon hackathon? Do we know how well it worked, and if things will be different at Wikimania?
[16:11:29] !link Manual:Preventing access | Toaster58
[16:11:29] Toaster58: http://www.mediawiki.org/wiki/Manual:Preventing_access_
[16:12:18] Thanks! I'll try and report back :)
[16:12:35] !wg NamespaceProtection | Toaster58
[16:12:35] Toaster58: https://www.mediawiki.org/wiki/Manual:%24wgNamespaceProtection
[16:13:11] hi guillom
[16:13:39] hi!
[16:14:01] guillom, rfarrand got to some conclusions but they haven't been documented yet.
[16:14:12] guillom, still, they have influenced our goal for Wikimania: https://phabricator.wikimedia.org/T93070
[16:14:41] guillom, instead of focusing on "buddies for everybody" we are focusing on buddies for all newcomers and for whoever wants one
[16:15:08] guillom, is this good enough?
[16:15:18] qgil_: Thanks. One thing I was thinking of was "revolving buddies", i.e. maybe having "themed tables" where people can come if they need help with VisualEditor, or with JavaScript, or with extensions, etc.
[16:15:23] guillom, https://phabricator.wikimedia.org/T93070
[16:15:51] ^^^ is the buddy task and your feedback is welcome there
[16:15:56] I imagine that someone could have multiple needs during the hackathon, and a single buddy might not be able to help with everything.
[16:16:05] Thanks :)
[16:16:07] I'm not going to Wikimania, so you'd better share this with Rachels and Siebrand
[16:16:12] Unless everyone was paired with a Roan clone
[16:16:13] Oh :(
[16:16:25] hi Reedy !
[16:16:27] qgil_: Sorry to hear that.
[16:16:36] Reedy: My point exactly.
[16:16:56] guillom, I will be having a 2-week vacation in Crete, so I will be very sorry too ;)
[16:17:03] Hah!
[16:17:29] qgil_: I visited Crete years ago; it was amazing. I hope you have a great time :)
[16:17:51] guillom, your idea about buddy tables is good
[16:17:59] I'll add it to the task.
[16:18:31] guillom, Crete, noted. :)
[16:24:18] Toaster58: A simpler solution would be to protect the pages you don't want students mucking about with, and leave the rest open to them
[16:30:53] * Ulfr grumbles about wishy-washy people who keep changing their minds about what stuff should say :|
[16:31:33] Anyone know of an easy method of importing strings into a special page that can be modified within the mediawiki interface?
[16:31:47] importing strings?
[16:32:03] You mean MediaWiki messages?
[16:32:14] I've got a little question object that takes an array of strings and generates a form with them
[16:32:18] on a special page
[16:34:05] the actual answers are always the same, but the number and question prompts will vary. I was planning on storing them in a database but creating an interface to modify that stuff would be time-consuming
[16:36:58] If mediawiki messages can do that for me I would be the happiest camper, can you direct me to documentation on them?
[16:40:45] !page Help:System message | Ulfr
[16:40:45] Ulfr: http://www.mediawiki.org/wiki/Extension:PageCSS
[16:40:51] oops
[16:40:58] !link Help:System message | Ulfr
[16:40:58] Ulfr: http://www.mediawiki.org/wiki/Help:System_message_
[16:42:12] I might be able to swing that, how does one import the contents of this magical message onto a special page?
[16:42:26] (if it's RTFM time let me know :D)
[16:44:33] Ulfr: you are using php?
[16:44:41] Yassir
[16:44:52] then it's simple
[16:44:56] !link Manual:Messages API | Ulfr
[16:44:57] Ulfr: http://www.mediawiki.org/wiki/Manual:Messages_API
[16:45:11] GEOFBOT: Thanks! Reedy: Well played
[16:45:26] remember to declare which messages you're using in the extension resource loader file thingamajig
[17:10:52] Ulfr: Thanks a lot, i did it via the first method.
[17:11:42] GEOFBOT: Thank you, it worked :)
[17:11:54] you're welcome :)
[17:15:58] * SamB_laptop is getting NS_ERROR_DOM_QUOTA_REACHED on English wikipedia, seeks advice (using iceweasel 31.7.0esr-1~deb8u1)
[17:26:15] SamB_laptop: probably https://phabricator.wikimedia.org/T66721
[17:37:56] legoktm: and what is the offending key called
[17:40:20] SamB_laptop: it is multiple keys
[17:45:36] legoktm: hmm, the modules get stored in multiple keys?
[17:46:37] SamB_laptop: yeah
[17:46:49] do they at least have a common prefix?
[17:48:55] oh, think I stumbled on it
[17:50:25] so, um, it seems to include the DB nickname or whatever in the key name? why's that, when the key's scoped to the domain for that wiki anyway? (Is it in some way related to google translate?)
[17:52:21] and any tips on what I can use to clear them without clearing all my localStorage everywhere?
[17:56:24] (Btw, the keys start with "MediaWikiModuleStore:")
[17:56:46] SamB_laptop: I have no idea, I just know why it's full :P
[17:57:09] I blame wikidata
[17:58:01] (wouldn't be looking at pages on wikipedias for dozens of languages I can't actually read otherwise, now would I?)
[17:59:20] Which ways are there to reduce data size in MediaWiki? using templates, compression for the database, deleting revisions, ...
[17:59:29] my DB has become quite large :-(
[17:59:49] of those 3 I'd probably prefer the last one, but only for specific accounts
[18:02:49] templ. could increase load
[18:03:03] I remember reading about something problematic when turning to compression, when having no compr. before
[18:13:35] Erkan_Yilmaz: Definitely should delete old revisions, compression is bad news bears
[18:17:01] thx for the feedback Ulfr
[18:18:34] How is information about template transclusion stored in the database?
[18:19:06] What do you mean by stored?
[18:19:06] There's tracking of what pages use what templates
[18:19:10] Like, what would the query be that would give me a list of all pages transcluding a certain template.
[18:19:23] Why are you reinventing the wheel? :P
[18:19:27] There's the API for this
[18:19:48] Is the information not directly ascertainable from the database the same way links are?
[18:19:56] Yes
[18:20:09] But I'm wondering why you're wanting to build the sql queries yourself
[18:20:27] With my scripts I prefer to query the database directly.
[18:20:45] https://github.com/wikimedia/mediawiki/blob/master/maintenance/tables.sql#L497
[18:21:06] thank you!
[18:27:31] which is more stable: the API to query template usage, or the layout of the relevant data?
[18:32:17] hi
[18:32:39] is there any documentation about how passwords are encrypted other than https://www.mediawiki.org/wiki/Manual:User_table#user_password ?
[18:33:09] H_V: what else would you like to know?
[18:33:24] what is another key, what is another string?
[18:34:21] how's your PHP? :)
[18:34:32] SamB_laptop: What do you mean?
[18:36:36] Reedy, i'm not a professional but I may understand some basics
[18:37:02] https://github.com/wikimedia/mediawiki/tree/master/includes/password
[18:38:35] SamB_laptop: The database tables and such won't change very often
[18:38:40] ok, first things first
[18:38:46] this is the earth
[18:38:50] it's roooouuunnnnddd
[18:38:54] the manual says 'length (of what??)' -- it's the length of the derived key
[18:39:06] !bug 1 | ori
[18:39:07] ori: https://bugzilla.wikimedia.org/1
[18:39:19] per https://en.wikipedia.org/wiki/PBKDF2 : PBKDF2 applies a pseudorandom function, such as a cryptographic hash, cipher, or HMAC to the input password or passphrase along with a salt value and repeats the process many times to produce a derived key, which can then be used as a cryptographic key in subsequent operations.
[18:39:32] 128 is the default length of the derived key
[18:39:47] equivalent to dkLen in that wikipedia article
[18:40:02] * Reedy hugs ori
[18:40:44] the dkLen parameter is also explained in https://tools.ietf.org/html/rfc2898
[18:40:54] * ori re-hugs
[18:41:02] !hug 1 | reedy
[18:41:44] that's beyond me, but thank you
[18:42:55] sigh
[18:42:59] i was still working on an explanation
[18:57:21] ori: if it makes you feel better, when I was researching password stuff last week MOST of it made sense
[18:57:30] but I had a little crypto experience
[18:57:37] * Ulfr opts not to say which form of experience
[18:58:25] Reedy: Excellent end of ze world reference btw
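For the transclusion question, a sketch of the direct-database approach using the templatelinks table linked above. The column names follow the 1.25-era schema (tl_from, tl_namespace, tl_title) and changed in later releases, which is also the answer to the stability question: the API is the stable interface, the table layout is not. The pymysql driver and the connection credentials here are stand-ins:

```python
# List all pages transcluding Template:Convert by joining templatelinks to page.
import pymysql

conn = pymysql.connect(host="localhost", user="wiki",
                       password="secret", database="wikidb")
with conn.cursor() as cur:
    cur.execute(
        """SELECT p.page_namespace, p.page_title
           FROM templatelinks tl
           JOIN page p ON p.page_id = tl.tl_from
           WHERE tl.tl_namespace = 10  -- 10 = the Template: namespace
             AND tl.tl_title = %s""",
        ("Convert",),
    )
    for ns, title in cur.fetchall():
        print(ns, title)
```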
[19:20:25] is this documentation still valid? https://www.mediawiki.org/wiki/Extension:EnforceStrongPassword
[19:20:48] csteipp, ^
[19:20:58] Latest version: "0.2 (2007-03-07)" - I doubt it
[19:23:44] Krenair: I doubt it as well, but at the same time it seems kinda like fairly core functionality that people wouldn't mess with
[19:29:26] is there a patch to fix instantcommons?
[19:30:10] Ulfr: I don't know the author of that extension, so I'm not sure. I don't think anyone at the WMF has looked at that extension.
[19:30:51] The hook will need a slight modification in 1.26
[19:30:55] csteipp: Well the people that have repeatedly asked if there's any way to deliver content on a website that can't be kept locally are asking me to increase password complexity requirements. What might the best way of doing that be?
[19:31:18] Ulfr: Are you running master? Or 1.25?
[19:31:37] csteipp: I can upgrade to 1.25, think it's like 1.24 or something at the moment
[19:31:56] * Ulfr has to kill a small forest any time I want to alter this particular wiki
[19:32:58] Ulfr: I just got an improved (imho) password policy framework merged a couple weeks ago, so 1.26 will be an improvement. Until then, adding the hook and something like Extension:EnforceStrongPassword isn't going to hurt you..
[19:33:43] * csteipp reads all the code
[19:33:46] understood, so there's no real way you folks would recommend setting complexity requirements other than what I found at that link until 1.26 comes out?
[19:34:07] Yeah, if you're ok with managing that in code, they're not doing anything crazy there that I see.
[19:34:45] upgrading would probably be more work. it's an absolutely absurd amount of security and paperwork for a website that really just hosts people's work vacation photos
[19:35:23] Ulfr: parent5446, iirc, had an extension that did password policies too
[19:35:52] But yeah, until 1.26, it has to be done in an extension
[19:36:06] eh, if the one I found will work and won't bork anything I'll just do that. I still have to figure out how to tell these people TLS is overkill for this site
[19:37:02] thanks for taking a look csteipp
[19:37:17] Nah, I don't have an extension like that
[19:37:21] Might be somebody else's
[19:38:03] parent5446: hey! I didn't think you 'did' IRC
[19:38:11] :P
[19:38:34] I only started being consistently online about five months ago. Before that I never had a solid client.
[19:43:31] Ah, https://www.mediawiki.org/wiki/Extension:SecurePasswords is what I was thinking of... which doesn't work on 1.24+, and is honestly probably overkill for what you're doing.
[19:45:00] csteipp: Yeah, I left my tinfoil hat at home
[19:45:03] that's a bit much
[21:29:29] hi - would it be here that I ask questions about varnish for my 1.25 install?
[21:29:47] someone can probably help...
[21:31:34] great, let me formulate a decent question that doesn't waste people's time....
[21:45:01] 14 minutes later...
[21:59:49] GEOFBOT, varnish is too awesome to formulate
[22:00:13] hypergrove: just ask; we don't bite.
[22:23:43] * Lord-Simon slaps hypergrove, to get him out of dream land...
[22:36:53] Well, legend has it that hypergrove was trying to configure varnish's secret ANTI-NSA FEATURE!
[22:37:18] Alas, the all-knowing NSA knocked him out with a dart from the nearest squirrel-robot
[22:37:37] He had his memory modified to forget all mentions of such a feature.
[22:38:30] Oh wait, there is a funny looking squirrel outside my window--*fwoop*
[22:38:34] * GEOFBOT passes out
[22:44:11] How would I go about making a new category that contains pre-existing categories?
[22:44:36] Do I change all the pages in those sub-categories to become part of my new top-level category as well?
[23:06:44] jollygood: no, that's not necessary
[23:08:23] you just put the categories in it, and maybe put some articles in it if they're particularly relevant to the category as a whole rather than just the subcategories
[23:12:36] Alright
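A closing sketch tied to the category question above: once the subcategory pages are tagged with the parent category, the API can walk the resulting structure. Category:Physics is a hypothetical parent here; assumes the requests library:

```python
# List the subcategories and direct member pages of a parent category.
import requests

API = "https://en.wikipedia.org/w/api.php"

params = {
    "action": "query",
    "list": "categorymembers",
    "cmtitle": "Category:Physics",  # hypothetical parent category
    "cmtype": "subcat|page",
    "cmlimit": "50",
    "format": "json",
    "formatversion": "2",
}
members = requests.get(API, params=params, timeout=10).json()["query"]["categorymembers"]
for member in members:
    print(member["title"])
```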