[00:32:41] If any of you aren't busy, can you view this https://the-whiteboard.wiki/wiki/Template:Databox_Character laugh at my mistakes and tell me why it's not working
[00:34:54] Not working?
[00:35:09] the parameters are not going through properly
[00:37:08] Some more {{!}} needed?
[00:39:31] Reedy: any suggestion as to where?
[00:43:12] general advice would be to simplify... Remove most of the non-working lines, and work with one at a time
[00:44:12] I suppose
[00:45:08] I suspect enwiki won't be much use, as it'll be mostly lua now
[00:50:29] enwiki?
[00:56:05] Hmm, it works if I don't use the if
[00:56:23] but I need to hide it if there is no input
[01:00:16] for me, {{!}} shouldn't be used inside an "if" as a template parameter separator. That will probably never work. Instead, use the "if" to "break" the template parameter if the parameter is empty
[01:03:31] What do you mean, break the template parameter? Is that not what I am doing?
[01:17:48] Pongles: not exactly. Instead of {{#if: {{{P|}}} | {{!}} P = {{{P}}} }}, use something like | P{{#if: {{{P|}}} || _ }} = {{{P|}}}
[01:20:18] but if I do that it will create a blank cell, causing the table to bloat
[01:22:12] no, because you'll pass an unknown parameter P_ instead of P to the parent template, and that's the same as not passing that parameter at all
[01:27:54] ahh, I see
[02:36:32] Oh, but part of the problem is that the if covers both the label and the data parameter
[02:36:38] although I guess I could dupe them
[04:21:50] looking to navigate uploading some code to gerrit
[04:24:42] currently on a code page - https://www.mediawiki.org/wiki/Extension:ZoomableImages/Code ... would like to figure out how to request review
[04:26:08] you have your diff in gerrit already?
[04:39:09] no... just did a request ( https://www.mediawiki.org/wiki/Git/New_repositories/Requests )
[13:09:30] https://phabricator.wikimedia.org/T97159 This bug does not seem to exist, and I am running MediaWiki 1.28 (alpha) on my system.
[13:11:10] amrits: Thanks for retesting! Feel free to add a comment in that task. Which you already did. :)
[13:12:04] The task was created 18 months ago, so maybe the code was fixed in the meantime but no one knew that there is a task about it
[13:12:48] andre__: Oh, then it's alright :)
[18:26:35] so someone sent me a backup of a wiki that I had
[18:26:56] i would like to restore a specific page out of it
[18:27:30] is there any other way other than installing the whole wiki and apache and php thing?
[18:30:00] azi`: uh, what format is the backup in? you could probably just find the page you want in it and copy/paste it out
[18:30:13] if xml
[18:31:05] backup would imply sql tho, legoktm
[18:31:54] arseny92: no, it could be an xml dump.
[18:32:08] anyways, you can still copy/paste stuff out of sql dumps, with some pain
[18:32:48] legoktm: sorry, i should have said that as well. it seems that the whole wiki folder is gzipped
[18:33:05] and inside there is an sql file with what I am assuming is the dump
[18:34:43] azi`: if it's not too giant of an sql dump, I would import that into mysql, and then do a query to get the page text out of it
[18:39:10] legoktm: ok, but this implies I should be able to navigate the db structure, right?
[18:39:13] i have no idea how to do that
[18:40:20] https://www.mediawiki.org/wiki/Database_schema
[18:40:25] are you familiar with mysql basics?
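A minimal wikitext sketch of the parameter-breaking trick from the 01:17 exchange earlier in this log. The template name "Databox row" and the parameter "caption" are illustrative only, not taken from the actual Template:Databox_Character source. A {{!}} inside the #if can never act as a parameter separator, because the parent template call is split into parameters before the pipe that {{!}} produces ever exists; renaming the parameter sidesteps that entirely:

    {{Databox row
    | caption{{#if: {{{caption|}}} || _ }} = {{{caption|}}}
    }}

When {{{caption}}} is empty, the #if takes its else branch, so the parent receives the unknown parameter "caption_" and silently discards it, exactly as if nothing had been passed; when it is non-empty, the parent receives a normal "caption" parameter and renders the row.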
[18:41:34] yeah, the basics
[18:43:26] legoktm, you would need several queries, one per revision of the page, if more than just the latest is desired
[18:43:42] azi`: do you want just the current page content or all revisions?
[18:43:55] no, just the latest page
[18:44:14] i was actually now able to start the wiki update script on a local server
[18:44:31] but it seems that there are conflicts since the domain name & db names are not consistent
[18:44:33] oh, if you're going to set up MW, that's probably easier than me helping you mess around in mysql
[18:44:46] well, maybe not
[18:45:13] yeah, i am not sure at this point what's best
[18:46:35] azi`: https://paste.fedoraproject.org/470127/28517914/raw/ that should be the basic sql to get the content
[18:49:12] there are images and latex formulas in the page itself
[18:49:22] am i too optimistic to think this is going to get rendered right?
[18:51:54] latex requires the Math extension to be installed
[18:52:17] images... if they're in the local wiki, you're going to need to upload those to the new wiki. The good news is that those are typically just in the images/ directory
[20:04:17] !class OutputPage
[20:04:17] See https://doc.wikimedia.org/mediawiki-core/master/php/html/classOutputPage.html
[20:05:11] !del class
[20:05:11] Deleting users is very messy and not recommended, because this breaks referential integrity in the database (they appear in many different tables like users, edit histories, recentchanges, preferences, etc). A safe solution is to block the users, and possibly rename them with . You can also try
[20:05:18] !class del
[20:05:19] Successfully removed class
[20:06:01] !class is See https://doc.wikimedia.org/mediawiki-core/master/php/class$1.html
[20:06:02] Key was added
[20:11:15] neat bot
[20:49:00] hello
[20:49:55] I am trying to install Parsoid on my Raspberry Pi (where a MediaWiki instance is configured to run under an Nginx server)
[20:50:26] I have followed every line in the tutorial: https://www.mediawiki.org/wiki/Parsoid/Setup
[20:50:48] But I am getting the error: W: Impossible de récupérer https://releases.wikimedia.org/debian/dists/jessie-mediawiki/InRelease Impossible de trouver l'entrée « main/binary-armhf/Packages » attendue dans le fichier « Release » : ligne non valable dans sources.list ou fichier corrompu
[20:51:02] qcha: I'm fairly sure we don't build parsoid for arm/armel
[20:51:04] Can I have some help?
[20:52:06] hmm, it says all
[20:52:18] but it is complaining about armhf
[20:52:35] qcha: You can try wget https://releases.wikimedia.org/debian/pool/main/p/parsoid/parsoid_0.5.3all_all.deb
[20:52:36] I am sorry, I don't know what armhf is
[20:52:40] and then dpkg -i to install it
[20:52:50] ok, I will try that
[20:52:51] qcha: It's the cpu architecture of the rpi
[20:53:13] ok, and so why do I get this message, according to you?
[20:53:33] well, it's in french
[20:53:50] "Unable to find the expected entry 'main/binary-armhf/Packages' in the 'Release' file: invalid line in sources.list or corrupted file"
[20:54:05] So it's to my original point about us not building for arm variants
[20:54:13] I can't imagine parsoid on a pi being very performant anyway
[20:54:33] Ok. Yes, it was just to test it
[21:07:52] Where do I set rate limiting for how many accounts can be created from a certain IP within a certain time frame? Is there a doc/manual on MediaWiki someplace?
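The fedoraproject paste from 18:46 has since expired, so here is a rough sketch of that kind of query rather than its actual contents. It assumes the pre-1.31 MediaWiki schema (page → revision → text), the default empty table prefix, and an illustrative page title:

    -- Latest text of one page from a MediaWiki SQL dump of that era
    SELECT old_text
    FROM page
    JOIN revision ON rev_id = page_latest    -- page_latest points at the current revision
    JOIN `text`   ON old_id = rev_text_id    -- revision text lives in the `text` table
    WHERE page_namespace = 0                 -- 0 = main/article namespace
      AND page_title = 'Some_Page';          -- titles use underscores, not spaces

If old_flags on the matching row mentions gzip, the old_text blob is compressed and needs decompressing before it is readable; small wikis with default settings usually store it as plain text.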
[21:08:19] https://www.mediawiki.org/wiki/MediaWiki
[21:09:20] !wg AccountCreationThrottle
[21:09:20] https://www.mediawiki.org/wiki/Manual:%24wgAccountCreationThrottle
[21:09:35] Thanks Reedy
[21:36:12] does anyone know if there's a way to get google search involved in an internal wiki that I cannot open up externally?
[21:37:14] if it's private, it can't be indexed by google. And the whole thing wouldn't make sense anyway
[21:37:49] makes sense to me
[21:38:02] mediawiki search never seems to find anything
[21:38:20] but yeah, I had a feeling the CSE needed to be able to touch it from the web
[21:38:35] of course
[21:39:20] t4nk901, there's the CirrusSearch extension if you're prepared to install Elasticsearch
[21:39:53] I already installed elasticsearch and elastica
[21:39:57] didn't make a difference
[21:40:10] we can find pages by their exact title, but that's about it
[21:41:23] err, are your searches being done via Cirrus?
[21:41:49] and did you index your wiki?
[21:42:31] hmm, hold on, it's been a while since I set this up
[21:42:38] I thought elastica was the engine, but that sounds wrong
[21:42:57] elastica is a requirement, but it doesn't search for you
[21:43:04] oh screw
[21:43:06] you need CirrusSearch
[21:43:07] it's WORKING
[21:43:14] \m/
[21:43:28] lol
[21:43:32] ok so, I am using cirrus
[21:43:43] and it must have taken longer to index than I thought
[21:43:50] and I just assumed it was still acting crappy
[21:44:16] but yeah, I'm definitely pulling hits from the articles' bodies now
[21:52:23] Thanks guys!
[22:29:28] Hi. Whenever I save a page after editing it with VisualEditor, the page 'breaks' hooks from Tabber and Comments (the ones I have noticed so far). In this edit, you can see how it changed everything on its own, while my sole edit was adding a number in a table: http://dragon-mania-legends-wiki.mobga.me/index.php?title=Demonic_Dragon&curid=85&diff=92686&oldid=91250 There seems to be no edit to the hook, but it kills the comment section and makes the hook visible on the page. Not only that, but all edits made through VE are marked as unpatrolled, even though my edits are normally automatically patrolled. For now, I have disabled VE in the mainspace (it still works in Category, User and Guide). The comments hook doesn't work even when I reload/refresh the page in my web browser.
[22:29:29] What might be the cause of all this?
[23:41:46] Why does mediawiki download LocalSettings through the web-based setup? Other platforms just put the file in place internally.
[23:42:43] Because it's probably not a good idea that MW can arbitrarily write to the root dir and create files
[23:42:59] Then how can it upgrade itself?
[23:43:21] It can't
[23:43:23] And it doesn't
[23:45:32] So, if there's a patch released, there's no supported method of just updating through the web UI?
[23:46:03] No
[23:52:37] Any reason for that, other than just no time spent on it?
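For the 21:07 rate-limiting question: the manual page the bot linked boils down to a single LocalSettings.php line. A sketch with an arbitrary illustrative limit, not a recommended value:

    # At most 3 account creations per client IP per day (the classic
    # integer form; newer releases also accept a more granular array
    # of count/seconds pairs -- see the linked manual page).
    $wgAccountCreationThrottle = 3;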
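And for the CirrusSearch thread (21:39 onward), a sketch of the usual bring-up, matching the diagnosis above that Elastica is only a client library and that searches miss until indexing finishes. The wfLoadExtension registration and script names follow the CirrusSearch documentation, but treat them as assumptions for any given version:

    # LocalSettings.php
    wfLoadExtension( 'Elastica' );             # client library only; does no searching itself
    wfLoadExtension( 'CirrusSearch' );         # the actual Elasticsearch integration
    $wgCirrusSearchServers = [ 'localhost' ];  # assumes a local Elasticsearch

    # Then, from the command line, build and fill the index:
    #   php extensions/CirrusSearch/maintenance/UpdateSearchIndexConfig.php
    #   php extensions/CirrusSearch/maintenance/ForceSearchIndex.php
    # Indexing a large wiki takes a while -- which is what bit t4nk901
    # above. Once it completes, route queries through Cirrus:
    $wgSearchType = 'CirrusSearch';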